Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
Ph.D. Thesis
2025
On The Applications of Coarse Network Geometry to Personalized Immuno-Oncology
Bannon, James
Abstract
|
PDF
Title: On The Applications of Coarse Network Geometry to Personalized Immuno-Oncology
Candidate: Bannon, James
Advisor(s): Bud Mishra
Abstract:
Immune checkpoint inhibitors (ICIs), also called immune checkpoint blockers, are a promising category of targeted therapy for solid tumors. Predicting which patients will respond to ICI therapy remains an open problem under active investigation. This thesis aims to improve the precision with which immune checkpoint inhibitors are prescribed. By focusing on one type of biological measurement, whole-tumor shotgun RNA sequencing data, which we call bulk RNA-seq, we are able to deeply explore the potential and limits of predictors built from this kind of measurement. Two of the algorithms presented here are based on a notion of graph curvature which we believe has extensive promise in bioinformatic inquiry.
The first part of this thesis performs a rigorous permutation testing evaluation of machine learning models for the task of predicting therapy response which we cast as a binary classification problem. We show that bulk RNA-seq data contains predictive signal but that there is an upper limit to ML model efficacy that can potentially be remedied by the curation of larger data sets or augmenting RNA-seq data with other biological measurements.
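A label-permutation test of this kind can be sketched as follows: hold a model's scores fixed, shuffle the response labels many times, and ask how often the shuffled AUC reaches the observed one. This is a minimal illustrative sketch, not the thesis's actual evaluation pipeline:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney): the probability that a random
    positive outscores a random negative. Assumes continuous scores
    (no tie correction)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def permutation_pvalue(scores, labels, n_perm=2000, seed=0):
    """Fraction of label shuffles whose AUC matches or beats the observed
    AUC (with the standard +1 correction); a small p-value indicates the
    scores carry real predictive signal."""
    rng = np.random.default_rng(seed)
    observed = auc(scores, labels)
    hits = sum(auc(scores, rng.permutation(labels)) >= observed
               for _ in range(n_perm))
    return (1 + hits) / (1 + n_perm)
```

With scores drawn from well-separated classes the p-value is tiny; with shuffled labels it concentrates near uniform.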
The next part presents a modular pipeline for the discovery of biomarkers from bulk RNA-seq data. We contextualize gene expression measurements using a protein-protein interaction (PPI) network and then use a notion of graph curvature to find (pairs of) genes in the PPI network that could serve as potential biomarkers. Our candidate biomarkers are evaluated using an extensive literature search and transfer learning experiments. We also provide a harmonized collection of drug-specific candidate markers found through rank aggregation that we believe merit further study.
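The notion of graph curvature used in such pipelines is typically Ollivier-Ricci curvature, which compares the optimal-transport cost between the neighbor distributions of an edge's endpoints to the edge's length. A minimal sketch for small unweighted graphs (a generic textbook construction, not the thesis's pipeline) might look like:

```python
import numpy as np
from collections import deque
from scipy.optimize import linprog

def bfs_distances(adj, source):
    """Hop distances from `source` in an unweighted, connected graph."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def ollivier_ricci(adj, x, y, alpha=0.0):
    """kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y), where m_z puts mass
    alpha at z and (1 - alpha)/deg(z) on each neighbor; W1 is computed
    exactly as a small linear program."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    def measure(z):
        m = np.zeros(n)
        m[idx[z]] = alpha
        for w in adj[z]:
            m[idx[w]] += (1 - alpha) / len(adj[z])
        return m
    mu, nu = measure(x), measure(y)
    d = {v: bfs_distances(adj, v) for v in nodes}
    cost = np.array([d[u][v] for u in nodes for v in nodes], float)
    # transport plan p[i, j] >= 0 with row sums mu and column sums nu
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1   # row i sums to mu[i]
        A_eq[n + i, i::n] = 1            # column i sums to nu[i]
    res = linprog(cost, A_eq=A_eq, b_eq=np.concatenate([mu, nu]))
    return 1 - res.fun / d[x][y]
```

On a triangle every edge is positively curved; interior edges of a path have curvature zero, matching the usual intuition that curvature distinguishes clustered from chain-like neighborhoods.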
Lastly, we cluster patients in an unsupervised manner using discrete Ollivier-Ricci Flow (ORF). Our method surfaces populations with distinct survival curves which in turn allows us to find many potential biomarkers, including gene expression modules. We believe the algorithm may be of independent interest for clustering other datasets in a diverse set of research areas.
As a result of the work presented here, we have provided novel algorithmic techniques for analyzing (biological) data and advanced the state of the art in finding biomarkers for ICI therapy.
-
Ph.D. Thesis
2025
Language Models at the Scale of Evolution
Rives, Alexander
Abstract
|
PDF
Title: Language Models at the Scale of Evolution
Candidate: Rives, Alexander
Advisor(s): Rob Fergus, Yann LeCun
Abstract:
I will describe the development of the evolutionary scale modeling (ESM) program, which proposes to solve an inverse problem across evolution to learn the biology of proteins from their sequences at the scale of life. Beginning from the idea that the sequences of proteins contain an image of biology in their patterns, this thesis shows that language models trained on protein sequences spanning the natural diversity of the Earth, by learning to predict which amino acids evolution chooses, develop feature spaces that reflect the immense scope and complexity of protein biology, containing both known and unknown biology. Biological structure and function emerge in the representations of the models. This emergence is shown to occur in direct linkage with improvements in the language modeling of sequences. The representation space has an ordered structure in which proteins are organized according to their underlying biology, and directions correspond to meaningful biological variations. Attention patterns materialize in the neural network that correspond to the folded three-dimensional structure of proteins. The probabilities assigned to amino acids within a given sequence context reflect protein function and predict the effects of mutations. The representations learned by protein language models constitute a general and transferable feature space which supports the discovery and generation of new biology. This has enabled an effort to reveal the structures of hundreds of millions of metagenomic proteins for the first time. The thesis concludes with experimental characterizations of proteins created by language models, which demonstrate that the feature space learned from natural proteins supports generating proteins beyond those in nature.
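The mutation-effect idea can be illustrated with a toy scoring function: given per-position amino-acid log-probabilities (here a made-up array standing in for a language model's output), a point mutation is scored by the log-odds of the mutant versus the wild-type residue at that site, in the style of masked-marginal scoring:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}

def mutation_effect(log_probs, wildtype, site, mutant):
    """Score a point mutation as the log-odds of the mutant vs. the
    wild-type residue under the model's distribution at that site.
    log_probs: (sequence_length, 20) array of per-position amino-acid
    log-probabilities, assumed to come from some protein language model."""
    return log_probs[site, AA_INDEX[mutant]] - log_probs[site, AA_INDEX[wildtype]]
```

A negative score means the model considers the mutation less plausible than the wild-type residue in that sequence context.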
-
Ph.D. Thesis
2025
An Explicit Certified Method for Path Planning Problem of an SE(3) Robot
Zhang, Zhaoqi
Abstract
|
PDF
Title: An Explicit Certified Method for Path Planning Problem of an SE(3) Robot
Candidate: Zhang, Zhaoqi
Advisor(s): Chee Yap
Abstract:
The design and implementation of theoretically sound robot motion planning algorithms is challenging, especially for robots with high degrees of freedom (DOF). This thesis presents an explicit, practical and certified path planner for a rigid spatial robot with 6 DOFs. The robot is a spatial triangle moving amidst polyhedral obstacles. Correct, complete and practical path planners for such a robot have never been achieved; this is widely recognized as a key challenge in robotics. We design such a planner by using the Soft Subdivision Search (SSS) framework, based on the twin foundations of ε-exactness and soft predicates. This SSS planner is a theoretical alternative to the standard exact algorithms, and provides much stronger guarantees than probabilistic or sampling algorithms.
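The subdivision idea behind SSS can be illustrated in a drastically simplified setting: a point robot in the unit square with a single disk obstacle, a conservative box classifier standing in for a soft predicate, and breadth-first search over FREE boxes. This is a toy sketch; the thesis's SE(3) planner is far more involved:

```python
import math
from collections import deque

FREE, STUCK, MIXED = "FREE", "STUCK", "MIXED"

def classify(cx, cy, h, obs_c, obs_r):
    """Soft-predicate stand-in for a square box (center (cx, cy),
    half-width h) against a disk obstacle: conservative test via the
    box's circumscribed disk."""
    gap = math.hypot(cx - obs_c[0], cy - obs_c[1])
    hd = h * math.sqrt(2)  # half-diagonal of the box
    if gap - hd >= obs_r:
        return FREE
    if gap + hd <= obs_r:
        return STUCK
    return MIXED

def subdivide(box):
    cx, cy, h = box
    q = h / 2
    return [(cx - q, cy - q, q), (cx + q, cy - q, q),
            (cx - q, cy + q, q), (cx + q, cy + q, q)]

def sss_plan(start, goal, obs_c, obs_r, eps=0.02):
    """Subdivide [0,1]^2 until boxes are FREE, STUCK, or smaller than
    eps, then BFS over adjacent FREE boxes from start to goal."""
    free, stack = [], [(0.5, 0.5, 0.5)]
    while stack:
        box = stack.pop()
        c = classify(*box, obs_c, obs_r)
        if c == FREE:
            free.append(box)
        elif c == MIXED and box[2] > eps:
            stack.extend(subdivide(box))
    def contains(box, p):
        cx, cy, h = box
        return abs(p[0] - cx) <= h and abs(p[1] - cy) <= h
    def adjacent(a, b):
        (ax, ay, ah), (bx, by, bh) = a, b
        t = 1e-9
        return abs(ax - bx) <= ah + bh + t and abs(ay - by) <= ah + bh + t
    s = next((b for b in free if contains(b, start)), None)
    g = next((b for b in free if contains(b, goal)), None)
    if s is None or g is None:
        return None  # "NO PATH" (or resolution exhausted)
    seen, q = {s}, deque([[s]])
    while q:
        path = q.popleft()
        if path[-1] == g:
            return [(b[0], b[1]) for b in path]  # box centers along path
        for b in free:
            if b not in seen and adjacent(path[-1], b):
                seen.add(b)
                q.append(path + [b])
    return None
```

The conservative classifier errs only toward MIXED, which subdivision then refines, mirroring how soft predicates trade exactness for computability at a given resolution.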
In this thesis, we address technical challenges for the SE(3) robot. First, we establish the foundational theory of the SSS framework by proving a general form of the Fundamental Theorem of SSS. Second, we introduce a topologically correct data structure for non-Euclidean path planning in the SE(3) space. Third, we analyze the distortion bound of the SE(3) representation. Fourth, we design an approximate footprint and combine it with the highly efficient feature-set technique to derive its soft predicate. Finally, we explicitly design the geometric primitives to avoid using a general solver for polynomial systems, which allows a direct implementation. These contributions represent a robust, practical, and adaptable solution to robot motion planning.
-
Ph.D. Thesis
2024
On Efficient Instantiations of Secure Multi-Party Computation in Practice
Bienstock, Alexander
Abstract
|
PDF
Title: On Efficient Instantiations of Secure Multi-Party Computation in Practice
Candidate: Bienstock, Alexander
Advisor(s): Yevgeniy Dodis, Marshall Ball
Abstract:
Secure Multi-Party Computation (MPC) is an area of cryptography that has been studied extensively since the 1980s. In full generality, MPC allows a set of mutually distrusting parties to privately compute a function of their inputs. That is, the parties interact in some protocol, and at the end obtain the output of the function, and nothing else. In the decades since the inception of MPC, great strides have been made towards making it more efficient. However, despite this progress, the use of MPC in practice still faces some shortcomings.
In this thesis, we take steps to mitigate two such shortcomings. The first deficiency we study is related to the communication networks in which such MPC protocols operate. MPC protocols are usually designed assuming that all parties have pairwise secure communication channels which are stable; i.e., nodes never crash, messages always arrive on time, etc. However, in the real world this is rarely the case: it is hard to sustain a stable connection between parties over long periods of time. One model that has been introduced to address this deficiency is called Fluid MPC (Choudhuri et al., CRYPTO 2021). In this model, parties are not mandated to stay online for long periods of time. Instead, parties come online for short periods of time and work together in committees to compute some function. The benefit is that individual committees are much more likely to be able to sustain stable connections for these shorter interactions. However, existing protocols in this model do not match the level of efficiency that is obtained by traditional MPC protocols. In the first part of this thesis, we study Fluid MPC, and in particular, introduce Fluid MPC protocols with efficiency that matches that of traditional MPC.
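The committee hand-off at the heart of the fluid model can be illustrated with additive secret sharing: each member of the outgoing committee re-shares its share to the incoming committee, which preserves the secret without any single party learning it. This is a minimal sketch with an arbitrarily chosen prime modulus; real Fluid MPC protocols must also handle computation on shares and malicious behavior:

```python
import secrets

P = 2**61 - 1  # prime modulus for additive sharing (illustrative choice)

def share(value, n):
    """Split a value into n additive shares mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def hand_off(old_shares, n_next):
    """Each outgoing committee member re-shares its own share to the
    next committee; incoming member j sums the sub-shares it receives.
    The secret is preserved, yet no single party ever sees it."""
    sub = [share(s, n_next) for s in old_shares]  # one row per sender
    return [sum(row[j] for row in sub) % P for j in range(n_next)]
```

Correctness follows because the column sums of the re-sharing matrix add up to the sum of the old shares, i.e., the secret.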
The second deficiency of MPC which we study in this thesis is that general-purpose protocols often are still not efficient enough to be used in practice. One way to resolve this is by using protocols that are tailor-made for specific applications. One such application that has gained recent attention is called Private Join and Compute (PJC). In this application, two parties come together with input sets and associated values for each item in their sets. The goal is to privately compute a function over the associated values of the intersection of the two sets. In practice, the size of the intersection is quite small, and therefore the private computation of the intersection is actually much more expensive than whatever computation needs to be done over it. In the second part of this thesis, we improve the efficiency of the tailor-made state-of-the-art protocols that are used to privately compute the intersection, thus improving the efficiency of prior PJC protocols.
-
Ph.D. Thesis
2024
Generative modeling and Stochastic Control as Dynamics on Probability Distributions
Domingo i Enrich, Carles
Abstract
|
PDF
Title: Generative modeling and Stochastic Control as Dynamics on Probability Distributions
Candidate: Domingo i Enrich, Carles
Advisor(s): Joan Bruna
Abstract:
Several modern machine learning algorithms can be studied from the perspective of evolution dynamics on the space of probability measures. Gradient descent-ascent algorithms that are used to solve minimax problems such as the ones arising in generative adversarial networks (GANs) can be interpreted as a joint evolution of two measures: one over the space of parameters of the generator, and one over the space of parameters of the discriminator.
In the first chapter of the thesis, I study systems of this form, and provide convergence guarantees when possible. Diffusion models, which are another generative modeling technique, are also based on dynamics on probability measures, in this case over the space of samples. The dynamics are simulated at inference time; the starting distribution is a Gaussian, and the final distribution is meant to be the target data distribution. Diffusion models were generalized by the Flow Matching framework, which makes it possible to construct different paths between the Gaussian noise distribution and the data distribution.
In the second part, I introduce Multisample Flow Matching, which is a generalization of Flow Matching with intimate connections to optimal transport. Stochastic optimal control is a third problem where dynamics on measures play a critical role. The goal is to learn a vector field (the control) in order to drive the behavior of the solutions of a stochastic differential equation.
In the third chapter, I present Stochastic Optimal Control Matching, a least-squares loss based on the same principles used to formulate diffusion model losses, which achieves errors an order of magnitude lower than those of existing methods.
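The path construction behind Flow Matching can be sketched in one dimension: for a Gaussian source and Gaussian target, the velocity field of the linear interpolant under the coupling x1 = m + s*x0 is available in closed form, so integrating the ODE transports noise to the target. In practice this field is learned by regression; here we use the closed form (an illustrative sketch, not taken from the thesis):

```python
import numpy as np

def velocity(x, t, m, s):
    """Velocity of the linear interpolant x_t = (1-t)*x0 + t*x1 under the
    coupling x1 = m + s*x0 (source N(0,1), target N(m, s^2)); in Flow
    Matching this field would be learned by least-squares regression."""
    scale = 1 + t * (s - 1)
    x0 = (x - t * m) / scale  # invert the interpolant to recover x0
    return m + (s - 1) * x0   # d/dt of the interpolant at fixed x0

def generate(n, m, s, steps=200, seed=0):
    """Euler-integrate dx/dt = velocity(x, t) from t=0 to t=1,
    starting from standard Gaussian noise."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    dt = 1.0 / steps
    for k in range(steps):
        x = x + velocity(x, k * dt, m, s) * dt
    return x
```

Integrating the ODE pushes the standard normal samples onto (approximately) the target Gaussian, which is exactly the simulation performed at inference time.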
The talk will cover the second and third chapters.
-
Ph.D. Thesis
2024
Solver-Aided Compiler Design for Programmable Network Devices
Gao, Xiangyu
Abstract
|
PDF
Title: Solver-Aided Compiler Design for Programmable Network Devices
Candidate: Gao, Xiangyu
Advisor(s): Anirudh Sivaraman, Srinivas Narayana
Abstract:
Historically, network devices were mostly fixed-function. They could run at a line rate of one network packet per nanosecond, but it was impossible to support newly developed network algorithms without upgrading the device. The emergence of programmable network devices remedies this drawback. These devices use the reconfigurable match table model to enable programmability and provide more flexibility for developers to continue updating and adding new algorithms to the device. Several programming languages have been developed for writing programs for these devices. Even though it is not hard to get started with writing packet-processing code, writing programs that fit within the target devices' various resource constraints is not easy. The root cause is the lack of optimizing compilers in this domain. Hence, this thesis focuses on optimizing compiler design for domain-specific network accelerators, using solver-aided techniques that can generate better compilation results than state-of-the-art compilers.
First, we build the Chipmunk compiler, which performs code generation for stateful transactions into programmable switches using program synthesis. We frame compilation as a solution-searching problem and use a program synthesis engine, SKETCH, to find a semantically equivalent compilation outcome. We also develop a series of algorithms to speed up the compilation process. We find that the Chipmunk compiler can generate better compilation results in terms of hardware resource usage within a reasonable time period.
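The search-based view of compilation can be illustrated with a toy enumerative synthesizer: given a specification and test inputs, enumerate expressions from a tiny hypothetical grammar until one agrees with the specification on all tests. SKETCH performs a far more sophisticated constraint-based version of this search:

```python
import itertools

def synthesize(spec, inputs, max_depth=2):
    """Enumerate expressions over {x, constants 0..3, +, min} up to the
    given depth and return (as a string) the first one agreeing with
    `spec` on every test input, or None if the grammar is exhausted."""
    atoms = [("x", lambda x: x)]
    atoms += [(str(c), (lambda c: lambda x: c)(c)) for c in range(4)]
    exprs = list(atoms)
    for _ in range(max_depth):
        new = []
        for (sa, fa), (sb, fb) in itertools.product(exprs, repeat=2):
            new.append((f"({sa}+{sb})",
                        (lambda fa, fb: lambda x: fa(x) + fb(x))(fa, fb)))
            new.append((f"min({sa},{sb})",
                        (lambda fa, fb: lambda x: min(fa(x), fb(x)))(fa, fb)))
        exprs += new
        for s, f in exprs:
            if all(f(x) == spec(x) for x in inputs):
                return s
    return None
```

For example, a saturating increment (a switch-ALU-flavored spec such as `min(x + 1, 3)`) is found within depth two; the returned expression is only guaranteed equivalent on the supplied test inputs, which is why synthesis loops are usually paired with a verifier.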
Second, we build the CaT compiler, which performs both code generation and resource allocation for the packet-processing pipeline using solver-aided technologies. We decompose the compilation problem for such pipelines into three phases, making extensive use of solver engines to simplify the development of these phases. We also incorporate some heuristics for further resource usage optimization. We observe that the CaT compiler can generate near-optimal compilation results at a much faster speed than Chipmunk.
Third, we build the Polyglotter compiler, which translates the parser portion of programs written for a source hardware device into programs for a target hardware device. This compiler unifies features across different programming languages and reduces the effort required to write algorithms across platforms. We find that the Polyglotter compiler generates correct transpilation results with better hardware resource usage.
To the best of our knowledge, we are the first to propose incorporating solver-aided techniques into compiler design for programmable network devices. In this domain, these compilers can outperform traditional compilers that rely on program rewrite rules. Our contributions go beyond building solver-aided compilers and include domain-specific algorithms that speed up the whole compilation process. Building on these compilers, we explore several useful aspects of solver-aided techniques and hope to extend them to more applications in future work.
-
Ph.D. Thesis
2024
Predictive and Generative Models of Protein Sequence and Structure
Lin, Zeming
Abstract
|
PDF
Title: Predictive and Generative Models of Protein Sequence and Structure
Candidate: Lin, Zeming
Advisor(s): Yann LeCun
Abstract:
Historically, protein engineering has predominantly involved a bottom-up strategy, utilizing naturally occurring components as the building blocks. However, designing arbitrary protein sequences and structures for specific problems presents significant challenges due to the complexity of biological systems. In this work, we tackle the problem of developing models of protein sequences and structures for prediction and generation. We show that neural networks can learn the patterns inherent to these systems, and we provide results for modeling proteins by predicting protein structures from a given sequence and vice versa. Generative models can also model the unconditional distributions of protein sequence and structure.
To model protein structures, we present an autoencoder architecture that can produce a wide array of protein backbones. These structures exhibit both local and global coherence in terms of secondary and tertiary structure. Using classical techniques to design sequences that fold to the generated backbones, we show that the model can generate novel sequences, which are validated in silico. To generate better sequences for these backbones, we then present ESM-IF1, a model for fixed-backbone protein design. We designed a large-scale system to predict millions of structures using AlphaFold. By training on this synthetic data, we were able to obtain state-of-the-art results with over 50% sequence recovery.
We then scale large protein language models to 15 billion parameters (ESM-2) as an unconditional model of protein sequences. ESM-2 is capable of replacing multiple sequence alignment (MSA) features to obtain nearly state-of-the-art structure prediction results from a single sequence. Removing MSA features gives a 60x speedup, allowing us to catalog the largest database of predicted protein structures. We open-sourced the ESM Metagenomic Atlas, a database of over 225 million high-confidence predicted structures, giving us an unprecedented view into the vast breadth and diversity of natural proteins. Finally, the speed and single-sequence nature of our model allows us to directly optimize the protein sequence with respect to the protein structure. We show that black-box optimization techniques can enable the design of proteins with structural constraints such as symmetry, scaffolding, and binding. In sum, we present a series of models that are able to model the conditional and unconditional distributions of protein sequence and structure.
-
Ph.D. Thesis
2024
Towards Responsible AI: Safeguarding Privacy, Integrity, and Fairness
Mirza, Muhammad Shujaat
Abstract
|
PDF
Title: Towards Responsible AI: Safeguarding Privacy, Integrity, and Fairness
Candidate: Mirza, Muhammad Shujaat
Advisor(s): Prof. Christina Pöpper
Abstract:
The widespread adoption of Artificial Intelligence (AI) into digital platforms, spanning general-purpose applications such as chatbots, professional tools like code generation, and high-risk domains like healthcare, has profoundly transformed user experiences. However, this rapid integration has also brought to the forefront critical concerns surrounding privacy, integrity, and fairness. This thesis systematically investigates these three interconnected challenges, revealing vulnerabilities and proposing approaches to address them, thereby contributing to the responsible development of AI technologies.
In addressing privacy concerns, we focus on managing personal information exposure in an era where digital data persists indefinitely. We begin with a global longitudinal analysis of privacy narratives to contextualize the evolving landscape of privacy concerns. Next, we systematically develop a semi-automated pipeline to assess the risks of training data extraction from large language models (LLMs), particularly those used for code generation such as GitHub Copilot. We demonstrate the feasibility of leaking various types of sensitive personal information, including email addresses, medical records, and passwords. Finally, we undertake a comprehensive systematization of privacy-enhancing technologies for exposure management, bridging gaps between technical solutions and user needs. We identify key discrepancies and propose actionable strategies for aligning technical solutions with user expectations. These findings lay the groundwork for user-centric privacy solutions that effectively address data persistence challenges.
To tackle threats to information integrity, we focus on the potential misuse of generative AI tools and coordinated disinformation campaigns. We conduct a detailed evaluation of factual accuracy of frontier LLMs, such as the GPT series, in the zero-shot classification setting. By comparing different model versions we uncover inconsistencies in performance improvements, with GPT-4's March release outperforming its June counterpart. Next, we develop a novel cybersecurity-inspired framework for characterizing disinformation threats, profiling threat actors, attack patterns, targets, and channels. We validate our framework's effectiveness through case studies of real-world disinformation campaigns, highlighting its potential to strengthen the integrity of online information ecosystems and laying the groundwork for potential automated threat-scoring systems.
Lastly, we address fairness in machine learning systems by identifying biases that reinforce inequalities. We introduce Global-Liar, a novel dataset uniquely balanced in terms of geographic representation, facilitating a more nuanced factuality evaluation of LLM biases across different regions. Using this dataset, we conduct a rigorous evaluation of general-purpose LLMs, revealing significant disadvantages faced by the Global South. Next, we conduct a thorough investigation into fairness in high-risk computer vision models used for medical diagnosis in healthcare. Our assessment reveals significant racial and sex biases in kidney and tumor segmentation tasks. We investigate a range of bias mitigation approaches, from pre-processing techniques, like stratified batch sampling, to algorithmic interventions, like fair meta-learning. Notably, our findings suggest that architectural choices play a significant role in bias reduction, emphasizing the necessity of careful design and thorough evaluation of model architectures.
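Stratified batch sampling, one of the pre-processing mitigations mentioned above, can be sketched as follows: instead of drawing batches at the dataset's group frequencies, draw an equal number of examples per group in every batch, oversampling minority groups with replacement. This is an illustrative sketch, not the thesis's implementation:

```python
import random
from collections import defaultdict

def stratified_batches(groups, batch_size, seed=0):
    """Yield batches of dataset indices in which every (demographic)
    group contributes an equal number of examples, sampling with
    replacement so minority groups are oversampled rather than diluted."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for i, g in enumerate(groups):
        by_group[g].append(i)
    per_group = batch_size // len(by_group)
    n_batches = len(groups) // batch_size
    for _ in range(n_batches):
        batch = []
        for members in by_group.values():
            batch += [rng.choice(members) for _ in range(per_group)]
        rng.shuffle(batch)
        yield batch
```

With a 90/10 group imbalance, every batch still contains the two groups in equal proportion, so gradient updates are not dominated by the majority group.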
In summary, our findings and proposed solutions in privacy, integrity, and fairness contribute to responsible AI development, aiming to democratize its benefits across all constituencies.
-
Ph.D. Thesis
2024
Learning from Rewards in Text Generation
Pang, Richard Yuanzhe
Abstract
|
PDF
Title: Learning from Rewards in Text Generation
Candidate: Pang, Richard Yuanzhe
Advisor(s): He He, Kyunghyun Cho
Abstract:
The progress in text generation comes from every stage in the pipeline: problem definition, data curation, learning, decoding, and evaluation. This dissertation focuses on learning. There is a mismatch between traditional training objectives and evaluation objectives: regular maximum likelihood estimation tries to minimize the cross-entropy loss with respect to each sample in the dataset, but the downstream evaluation is often based on a reward that scores the compatibility of the input-output pair (e.g., human judgments of the output). I aim to bridge this gap by optimizing for the reward of the generated text directly.
The talk is composed of the following components. (1) Rewards can be expensive to obtain. To tackle this challenge in the social dialogue setting, we extract implicit signals from deployment data without extra human annotations. (2) The model could make slow or no progress in learning, and one idea is to obtain denser and higher-quality rewards. In neural machine translation, we define a reward inspired by noisy channel decoding, which has a long history, and we are able to increase decoding speed significantly while ensuring similar translation quality. (3) Another way to make progress in learning is to innovate on training algorithms instead. We set the rewards to be based on the simple exact match of generations and references, but algorithm-wise we explore the extreme case where we do not deviate too far from references by framing text generation as an offline reinforcement learning (RL) problem. We propose generation by off-policy learning from demonstrations (GOLD) using importance weighting. Our generations outperform those trained by MLE and policy gradient on a range of tasks. (4) We show that we do not need to rely on RL, using a few reasoning tasks (e.g., math, science, commonsense) as the testbed. We develop an approach called iterative reasoning preference optimization (IRPO) that optimizes for winning vs. losing reasoning chains of thought, using modified direct preference optimization as the criterion. IRPO results in markedly increased accuracies compared to a range of baselines.
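The preference-optimization criterion that IRPO modifies, direct preference optimization (DPO), reduces to a simple loss on policy and reference log-probabilities of the winning and losing completions. The numbers in the usage below are made up; this is a generic DPO sketch, not the thesis's training code:

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss: -log sigmoid of the scaled difference between the
    policy-vs-reference log-ratios of the winning and losing completions.
    Minimizing it raises the chosen completion's probability relative to
    the rejected one, anchored to the reference model."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -np.log(1 / (1 + np.exp(-margin)))  # = -log sigmoid(margin)
```

At zero margin the loss equals log 2, and it decreases as the policy separates the winning from the losing chain of thought; IRPO, as described in the corresponding work, additionally includes a likelihood term on the winning completion.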
To conclude the talk, I will discuss future directions for using large language models as rewards. I will briefly mention the initial promise shown by our work on self-rewarding language models, which uses LLM-based rewards with a learning algorithm connected to that of IRPO; the discussion is then followed by the corresponding challenges and next steps. I will also touch on human-AI collaboration, an additional way to improve LLM evaluation capabilities.
-
Ph.D. Thesis
2024
Verification of Concurrent Search Structures
Patel, Nisarg
Abstract
|
PDF
Title: Verification of Concurrent Search Structures
Candidate: Patel, Nisarg
Advisor(s): Prof. Thomas Wies
Abstract:
Concurrent search structures are a class of concurrent data structures that implement a key-value store. Concurrent search structures are integral components of modern software systems, yet they are notoriously difficult to design and implement. In the context of concurrency, linearizability is the accepted notion of correctness of a data structure. Verifying linearizability of concurrent search structures remains a formidable challenge due to the inherent complexity of the underlying algorithms. So far, verification of these data structures has often led to large, intricate proofs that are hard to comprehend and reuse.
The concrete contribution of the thesis is developing and verifying new template algorithms that cover several variants of lock-free skiplists and lock-based log-structured merge (LSM) trees. The template algorithms capture the concurrency mechanisms, but abstract away node-level details and the maintenance operations.
The generalizable contribution of the thesis is the advancement of the verification technology required to prove the new template algorithms correct. There are two key contributions here, the first relating to hindsight reasoning and the second to keyset reasoning. Hindsight reasoning has been shown to be useful for proving linearizability, but it had not been explored in the context of a foundational program logic. The thesis addresses this challenge by embedding the technique of hindsight reasoning in the concurrent separation logic Iris via prophecy variables. Keyset reasoning is useful for lifting assertions on a node's contents to the global contents held by the structure. The thesis develops a keyset resource algebra, an Iris resource algebra that enables keyset reasoning.
All of the techniques and proofs are mechanized in Iris/Coq. Verified search structures include in particular the Michael set, the Harris list, the Herlihy-Shavit skiplist and an LSM-tree implementation based on LevelDB. The verification effort represents a significant contribution as it is the first mechanized proof of linearizability for concurrent skiplists and LSM-trees.
-
Ph.D. Thesis
2024
Neural Language Representations and Scaling Semi-Supervised Learning for Speech Recognition
Peyser, Cal
Abstract
|
PDF
Title: Neural Language Representations and Scaling Semi-Supervised Learning for Speech Recognition
Candidate: Peyser, Cal
Advisor(s): Prof. Kyunghyun Cho, Prof. Michael Picheny
Abstract:
Speech recognition research has focused for several years on the incorporation of unpaired speech and text data alongside conventional supervised datasets. Dominant methods have emphasized auxiliary tasks for refining speech and/or text representations during model training. These methods have generally performed strongly when paired with very small supervised datasets, but do not yield the same improvements against strong, supervised baselines. We argue in this thesis that the path to scaling these methods lies in the speech and text representations themselves. We investigate statistical properties of these representations, and show that downstream ASR performance corresponds to a model's ability to jointly represent speech and text. We analyze existing methods for semi-supervised ASR, and develop an algorithm to improve them at scale by aligning speech and text in representation space.
-
Ph.D. Thesis
2024
Unlocking AI outside the training distribution: Generalization, Causality, and Coronary Risk Modeling
Puli, Aahlad Manas
Abstract
|
PDF
Title: Unlocking AI outside the training distribution: Generalization, Causality, and Coronary Risk Modeling
Candidate: Puli, Aahlad Manas
Advisor(s): Prof. Rajesh Ranganath
Abstract:
Modern AI models make it easy to exploit the correlations in a dataset to predict a target of interest from a given set of inputs. However, the primary use of these models often lies outside the training data. For example, while one can train a Transformer to correlate a patient's medical history to their chances of developing coronary heart disease (CHD), the goal would be to estimate risks on populations elsewhere or in the future. Challenges arise if the model relies on correlations that shift between training and test times or capture non-causal relationships. Predictions based on unstable relationships can degrade outside the training distribution, and basing treatment decisions on non-causal relationships can result in harm. This thesis first develops a methodology for generalizing out-of-distribution (OOD) and estimating causal effects. It closes with an empirical study of building and transporting CHD risk models at two large hospital systems.
The first part begins by defining a class of distribution shifts where standard training, or balancing the data, yields models that can perform worse than random guessing. We characterize representations that generalize across such shifts and derive an algorithm to build models with such representations. Next, we develop an approach to encode knowledge of features used by humans into building robust models. The last work in this part identifies biases implicit in the standard way of training, gradient-based optimization of cross-entropy, that force models to depend more on unstable features than on the more informative stable ones. We develop a class of loss functions to encourage dependence on the more informative features.
The second part of this thesis studies cases where common assumptions that enable causal estimation are violated. We provide an algorithm to estimate causal effects with deep models from confounded data where instrumental variables are available. This algorithm generalizes the control function method and works without the separability assumptions required by popular algorithms like two-stage least squares and the generalized method of moments. Then, we consider tasks where the confounders are known to equal a function of the variables whose effects we want to estimate; this setup violates an assumption known as overlap or positivity, commonly made to uniquely determine (identify) causal effects from non-randomized data. In this setting, we derive nonparametric conditions for identifiability and construct an estimator that solves a gradient flow equation to answer general causal queries from the data without overlap.
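The classical baseline that the control-function approach generalizes is two-stage least squares (2SLS), which can be sketched and sanity-checked on simulated confounded data. This is an illustrative sketch with synthetic numbers, not the thesis's estimator:

```python
import numpy as np

def two_stage_least_squares(z, x, y):
    """2SLS with a single instrument: regress the treatment x on the
    instrument z (first stage), then regress the outcome y on the fitted
    x_hat (second stage); returns the estimated causal effect of x on y."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first stage
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]     # second-stage slope
```

On data with an unobserved confounder that drives both treatment and outcome, ordinary least squares is badly biased while 2SLS recovers the true effect, which is the failure mode (and remedy) the chapter builds on.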
The last part of this thesis performs an empirical study of building and transporting CHD risk models between two large hospitals. Departing from the standard approach of constructing risk scores from carefully chosen features, we use broad feature sets available in the electronic health records (EHRs). We train AI models to predict time-to-CHD from minimally curated EHR data that outperform existing risk scores at the institution where they were trained and when transported externally.
-
Ph.D. Thesis
2024
DrawTalking: Building Interactive Worlds by Sketching and Speaking
Rosenberg, Karl Toby
Abstract
|
PDF
Title: DrawTalking: Building Interactive Worlds by Sketching and Speaking
Candidate: Rosenberg, Karl Toby
Advisor(s): Ken Perlin
Abstract:
This thesis introduces the design and implementation of an interaction concept called DrawTalking. Through simple combinations of sketching and speaking, the user can improvisationally build an interactive world of graphics, animations, diagrams, and dynamic mechanisms with behavior and rules, as if by narrating a story or explaining a concept to an audience. The interface demonstrates a possible step towards designing future interfaces more closely in-tune with how we naturally communicate and think.
For context, sketching while speaking has played a major part in innovation across disciplines. The combination of visuals and spoken language enables us to make-believe: think about, describe, communicate, and interact with anything that we can think of, including things that do not or cannot exist in the real world. Evolving technology creates opportunities to move beyond sketching and speech alone. Human-computer interactions of the future, drawing inspiration from our process of make-believe, can add interactive computation to the combination of sketching and speech, allowing us to work with explorable worlds, simulations, and mechanics. By enabling such interactions, we might think, learn, design, play, and tell stories in increasingly expressive ways.
Towards this idea, what makes for a good interface for computation-mediated sketching and speaking? This touches upon several fundamental questions in interaction design, human-AI interaction, and human-centered interfaces, chief among them: how do we balance human control and machine automation?
Inspired by real-world speaking and sketching interactions, and seminal works in dynamic sketching, interactive visual programming, and language interfaces, we designed interaction techniques that draw on the way people describe objects and phenomena when telling stories and explaining processes at a whiteboard.
How does it work? The user speaks to label hand-drawn sketches with names and properties, and to define rules for how their world should behave. This communicates semantic intent to the computer, while giving the user the flexibility to choose how to represent and change their drawings. Now the user can interact with a simulated world simply by narrating stories or describing mechanics, which dynamically creates running interactive programs from built-in primitives and user-customized rules.
To gauge understanding of the mechanics of DrawTalking and to derive use cases, we invited participants to an open-ended one-on-one user-study session with the researcher to discover and explore the features in DrawTalking. Each user improvised and prototyped interactive sketch-based animations and gameplay scenarios by collaborating with the researcher. The resulting artifacts and discussion were oriented around each participant's specific experiences and background.
Feedback suggests that our approach is promising and intuitive: it prioritizes user control; it is flexible and supports improvisation; the workflow is fluid; the features are extensible and adaptable to other application domains and contexts beyond sketching; the design demonstrates how multiple applications can use similar language-based interaction techniques and behaviors predictably alongside other language-based technologies; it enables programming-like capability without code.
Through the research and design process of DrawTalking, we learned that it could represent an approach to designing complex interoperating systems for human-AI collaboration. We hope it can serve as a useful example for research and design of future machine-mediated interfaces, interactions, and computer systems. -
Ph.D. Thesis
2024
Algorithmic enhancements to causal inference problems
Shen, Bingran
Abstract
|
PDF
Title: Algorithmic enhancements to causal inference problems
Candidate: Shen, Bingran
Advisor(s): Prof. Dennis Shasha
Abstract:
This thesis explores novel approaches to inferring and representing causal relationships in biological networks. We introduce EnsInfer, an ensemble method that combines state-of-the-art inference algorithms using a Naive Bayes classifier, outperforming individual methods and providing a flexible framework for integrating diverse data types. Our research then challenges the conventional representation of gene regulatory networks (GRNs) by demonstrating that nonlinear machine learning models achieve better predictive performance than models based solely on "gold standard" regulatory edges. To address this limitation, we propose a bipartite network representation that better captures the synergistic regulatory effects of multiple transcription factors on target genes. This framework focuses on four key goals: predictive accuracy, parsimonious enumeration of predictive regulatory genes, identification of disjoint sets of predictive regulatory genes, and construction of a bipartite network representation of causality. Our work provides an actionable and interpretable paradigm for investigating causal gene regulation, with potential applications across diverse domains of causality research.
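As an illustration of the ensemble idea, the sketch below (synthetic data and hypothetical base-method scores, not the EnsInfer code) combines edge-confidence scores from several base inference algorithms with a hand-rolled Gaussian Naive Bayes classifier trained on a small set of gold-standard labels:

```python
import numpy as np

# Illustrative ensemble: each base GRN inference method assigns a confidence
# score to every candidate regulatory edge; a Naive Bayes classifier combines
# those scores using gold-standard edge labels.

def fit_gaussian_nb(scores, labels):
    """Fit per-class Gaussian parameters for each base method's score."""
    params = {}
    for c in (0, 1):
        rows = scores[labels == c]
        params[c] = (rows.mean(axis=0), rows.var(axis=0) + 1e-9,
                     len(rows) / len(labels))
    return params

def predict_gaussian_nb(params, scores):
    """Return P(edge is real) under the naive independence assumption."""
    log_post = []
    for c in (0, 1):
        mu, var, prior = params[c]
        ll = -0.5 * np.sum(np.log(2 * np.pi * var)
                           + (scores - mu) ** 2 / var, axis=1)
        log_post.append(ll + np.log(prior))
    log_post = np.stack(log_post, axis=1)
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    return post[:, 1] / post.sum(axis=1)

rng = np.random.default_rng(0)
n_edges, n_methods = 400, 3
labels = rng.integers(0, 2, n_edges)          # gold-standard edge labels
# Real edges tend to receive higher scores from every base method.
scores = rng.normal(labels[:, None] * 1.5, 1.0, (n_edges, n_methods))

params = fit_gaussian_nb(scores[:200], labels[:200])   # training split
probs = predict_gaussian_nb(params, scores[200:])      # held-out edges
acc = np.mean((probs > 0.5) == labels[200:])
```

Because the Naive Bayes layer only consumes scores, any new base method or data type can be added as one more feature column, which is the flexibility the abstract refers to.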
-
Ph.D. Thesis
2024
Olympiad-level Geometry Theorem Proving without Human Demonstrations
Trinh, Trieu
Abstract
|
PDF
Title: Olympiad-level Geometry Theorem Proving without Human Demonstrations
Candidate: Trinh, Trieu
Advisor(s): He He
Abstract:
Proving mathematical theorems at Olympiad level represents a significant milestone in human-level automated reasoning, owing to their reputed difficulty among the world’s best talents in pre-university mathematics. Current machine learning approaches, however, are not applicable to most mathematical domains due to the high cost of translating human proofs into machine-verifiable format. The problem is even worse for geometry due to its unique translation challenges, resulting in severe scarcity of training data. We propose G0, a theorem prover for Euclidean plane geometry that sidesteps the need for human demonstrations by synthesizing millions of theorems and proofs across different levels of complexity. G0 is a neuro-symbolic system that uses a neural language model, trained from scratch on our large-scale synthetic data, to guide a symbolic deduction engine through infinite branching points in challenging problems. On a test set of the 30 most recent Olympiad problems, G0 solves 25, outperforming the previous best method, which solves only 10, and approaching the performance of an average International Mathematical Olympiad (IMO) gold medalist. Notably, G0 produces human-readable proofs, solves all geometry problems in the IMO 2000 and 2015 under human expert evaluation, and discovers a generalized version of a translated 2004 IMO theorem.
-
Ph.D. Thesis
2024
Improve Language Model Serving Efficiency with Fine-grained and Stateful Scheduling
Yu, Lingfan
Abstract
|
PDF
Title: Improve Language Model Serving Efficiency with Fine-grained and Stateful Scheduling
Candidate: Yu, Lingfan
Advisor(s): Jinyang Li
Abstract:
The world has witnessed the remarkable success of large language models (LLMs), led by the fast-growing popularity of ChatGPT. However, it is challenging to serve these language models and deliver both high throughput and low latency due to the iterative nature of language models. This thesis identifies two key issues impacting the performance of existing systems: (1) coarse-grained batching at the request level results in wasteful computation for requests with variable input and output lengths; (2) the lack of stateful context management results in duplicate computation for applications that engage in multi-turn interactions with the LLM. Two systems, BatchMaker and Pensieve, are then presented to address these issues.
BatchMaker proposes a technique called cellular batching to improve the latency and throughput of language model inference. Existing systems use batch execution of the dataflow graphs of a fixed set of requests. By contrast, BatchMaker makes finer-grained batching decisions at each token processing step, and dynamically assembles a batch for execution as requests join and leave the system.
Pensieve is a system optimized for multi-turn conversation LLM serving. It maintains the conversation state across requests from the same conversation by caching previously processed history to avoid duplicate processing. Pensieve's multi-tier caching strategy utilizes both GPU and CPU memory to store and retrieve cached data efficiently. Pensieve also generalizes the recent PagedAttention kernel to support attention between multiple input tokens whose KV cache is spread over non-contiguous GPU memory.
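As a toy illustration of the token-level batching idea behind BatchMaker (a deliberately simplified sketch, not the actual system), the scheduler below re-forms the batch at every token step, so newly arriving requests immediately occupy slots freed by finished ones instead of waiting for a whole request-level batch to drain:

```python
from collections import deque

# Each request needs `remaining` decoding steps; the scheduler can process at
# most `max_batch` requests per token step.

def token_level_schedule(lengths, arrivals, max_batch):
    """Re-form the batch at every token step; new requests join immediately."""
    pending = deque(sorted(zip(arrivals, lengths)))    # (arrival_step, length)
    active, finish_times, step = [], [], 0
    while pending or active:
        # Admit any arrived requests into free batch slots.
        while pending and pending[0][0] <= step and len(active) < max_batch:
            active.append(pending.popleft()[1])
        active = [r - 1 for r in active]               # one token per request
        step += 1
        finish_times += [step] * active.count(0)       # record completions
        active = [r for r in active if r > 0]
    return finish_times

# Two long requests and two short late arrivals: with token-level batching the
# short requests slip into the running batch and finish at step 3, long before
# the long requests complete at step 8.
finishes = token_level_schedule(lengths=[8, 8, 2, 2], arrivals=[0, 0, 1, 1],
                                max_batch=4)
```

Under request-level batching, the two short requests would instead be held until the first batch fully drained, which is the wasted work the thesis targets.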
Experiments on various workloads show that BatchMaker improves throughput by 25-80% while reducing latency by 18-90%, and Pensieve improves throughput by 33-100% while reducing latency by 40-77%. -
Ph.D. Thesis
2024
Theory of Symmetric Neural Networks
Zweig, Aaron
Abstract
|
PDF
Title: Theory of Symmetric Neural Networks
Candidate: Zweig, Aaron
Advisor(s): Joan Bruna
Abstract:
Symmetric functions, which take as input an unordered, fixed-size set, find practical application in myriad physical settings based on indistinguishable points or particles, and are also used as intermediate building blocks to construct networks with other invariances. Symmetric functions are known to be universally representable by neural networks that enforce permutation invariance. However, the theoretical tools that characterize the approximation, optimization, and generalization of typical networks fail to adequately describe architectures that enforce invariance.
This thesis explores when these tools can be adapted to symmetric architectures, and when the invariance properties lead to new theoretical findings altogether. We study and prove approximation limitations on the extension of symmetric neural networks to infinite-sized inputs, the approximation capabilities of symmetric and antisymmetric networks relative to the interaction between set elements, and the learnability of simple symmetric functions with gradient methods. -
Ph.D. Thesis
2023
On Matching Problems in Large Settings
Agarwal, Ishan
Abstract
|
PDF
Title: On Matching Problems in Large Settings
Candidate: Agarwal, Ishan
Advisor(s): Richard Cole
Abstract:
Matching problems arise in several practical settings and have long been a subject of theoretical analysis. Typically, the settings of interest involve a large number of agents. We further the study of matching problems in two settings: the stable matching setting, which has been studied since the seminal work of Gale and Shapley, and a setting where agents' values to prospective partners degrade over time, forcing them to balance searching for a better partner against deciding to match.
In the stable matching setting, we extend a line of research that seeks to explain the dichotomy between the fact that Gale and Shapley's Deferred Acceptance algorithm seems to work well in practice, even when agents only submit a short list of prospective partners to the centralized matching algorithm, and the fact that if the agents' preferences are allowed to be arbitrary, complete lists of all agents' preferences are needed in order to guarantee a stable matching. To this end, we consider probabilistically generated preference lists and we show that under fairly general assumptions and in a variety of models, with high probability, short lists of prospective partners, namely length $\Theta (\log n)$ instead of $n$, suffice for most of the agents. We prove our bounds are tight up to constant factors. Furthermore, we construct a simple set of $\Theta (\log n)$ possible matches per agent for almost all agents and demonstrate (in the form of an approximate equilibrium result) that they can afford to restrict their proposals to this set, while incurring only a small loss in utility.
In the time discounted utilities setting, we consider a dynamic matching market, and study how agents should balance accepting a proposed match with the cost of continuing their search. Our model has two new features: finite agent lifetimes with linear loss in utility over time, and a discrete population model, aspects which are underexplored in the literature. We quantify how well the agents can do by providing upper and lower bounds on the collective losses of the agents, with a polynomially small failure probability, where the notion of loss is with respect to a plausible baseline we define. These bounds are also tight up to constant factors.
In both settings, we complement our theoretical results with numerical simulations.
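The truncated-list phenomenon in the first setting is easy to observe in simulation. The sketch below (an illustrative experiment, not the thesis code) runs man-proposing deferred acceptance where every man submits only a Θ(log n) prefix of a uniformly random preference list; the resulting matching has no blocking pair among listed partners, and most agents end up matched:

```python
import math
import random

def deferred_acceptance(n, list_len, rng):
    """Man-proposing deferred acceptance with truncated proposal lists."""
    men_prefs = [rng.sample(range(n), n)[:list_len] for _ in range(n)]
    women_full = [rng.sample(range(n), n) for _ in range(n)]
    rank = [{m: i for i, m in enumerate(p)} for p in women_full]
    next_prop = [0] * n            # index of next woman each man proposes to
    husband = [None] * n           # current partner of each woman
    free = list(range(n))
    while free:
        m = free.pop()
        while next_prop[m] < list_len:
            w = men_prefs[m][next_prop[m]]
            next_prop[m] += 1
            if husband[w] is None:                      # w accepts anyone free
                husband[w] = m
                break
            elif rank[w][m] < rank[w][husband[w]]:      # w trades up
                free.append(husband[w])
                husband[w] = m
                break
    return men_prefs, rank, husband

rng = random.Random(0)
n = 200
list_len = 3 * math.ceil(math.log(n))                  # Theta(log n) proposals
men_prefs, rank, husband = deferred_acceptance(n, list_len, rng)
matched_frac = sum(h is not None for h in husband) / n
```

Deferred acceptance guarantees that any man-woman pair who both list each other cannot block the outcome, so the only price of truncation is the small fraction of agents left unmatched.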
-
Ph.D. Thesis
2023
Function Space Reasoning in Gaussian Processes and Neural Networks
Benton, Gregory
Abstract
|
PDF
Title: Function Space Reasoning in Gaussian Processes and Neural Networks
Candidate: Benton, Gregory
Advisor(s): Andrew Gordon Wilson
Abstract:
In a typical modeling setting we have prior notions of what types of functions we want to learn. For example, in regression we may want to learn a smooth function or a periodic function and in image classification we may want to learn a function that is invariant to rotations. While function space provides us the benefit of being able to reason about traits like invariance or smoothness, it is often difficult to directly quantify the functional properties of models, in particular for large parametric models like neural networks.
In this thesis we leverage our ability to reason about function space to build more powerful models in both Gaussian processes (GPs) and neural networks. By generating GP kernels as functions themselves of latent processes, we introduce methods for providing uncertainty over what types of functions we produce, not just over the functions themselves in GP models. We also introduce methods for learning levels of invariance and equivariance in neural networks, enabling us to imbue the functions our models produce with soft or limited equivariance constraints. Finally, we show how we can leverage our understanding of parameter space in neural networks to efficiently ensemble diverse collections of functions to improve the accuracy and robustness of our models. Through the introduction of these methods we show that by carefully considering the types of functions we are producing we can describe models with a range of desirable properties. These properties include more flexible models, models that better align with domain knowledge, and models that are both accurate and robust. We demonstrate these results on a broad range of problems, including time series forecasting, image classification, and reinforcement learning.
-
Ph.D. Thesis
2023
Bridging the Gap from Supervised Learning to Control
Brandfonbrener, David
Abstract
|
PDF
Title: Bridging the Gap from Supervised Learning to Control
Candidate: Brandfonbrener, David
Advisor(s): Joan Bruna
Abstract:
The combination of deep learning and internet-scale data with supervised
learning has led to impressive progress in recent years. However, the
potential of this progress has yet to be realized in the context of
control problems beyond games that are easy to simulate. This thesis
attempts to bridge this gap so as to leverage tools from supervised
learning to solve control problems. To do this, we focus on the offline
reinforcement learning setting which attempts to learn a control policy
from a fixed dataset rather than requiring the policy to learn and
collect data at the same time. This removes issues of non-stationary
training data and exploration from the control problem, which allows the
more straightforward application of tools from supervised learning.
We study this intersection between supervised learning and control from
several angles. In the first part of the thesis, we present work on
policy learning, focusing on simplified algorithms that look more like
standard supervised algorithms. In the second part, we move one step
earlier in the pipeline and consider how to best collect datasets for
offline reinforcement learning. And in the last part, we consider how to
design pretraining objectives to learn representations for downstream
offline policy learning. Taken together, these contributions present a
view of the promise and challenges that face the application of machine
learning to control problems.
-
M.S. Thesis
2023
On Certified Isotopic Approximation of Space Curves
Dogan, Caglar
Abstract
|
PDF
Title: On Certified Isotopic Approximation of Space Curves
Candidate: Dogan, Caglar
Advisor(s): Chee Yap
Abstract:
The approximation of implicitly defined curves or surfaces is a problem of interest for many fields. As a result, this problem has been explored using algebraic, geometric, and numerical methods. Amongst these, a numerical method called the Marching Cubes algorithm [4] has been the primary choice in implementations because of its efficiency and ease of implementation, even though a guarantee of topological correctness was generally absent.
Research in this area has largely focused on approximations of 𝑛 − 1 dimensional manifolds in 𝑛 dimensional Euclidean space. These are called co-dimension 1 manifolds, defined as the zero sets of single equations in 𝑛 variables. Plantinga and Vegter (2004) [8] derived the first algorithms with guaranteed topological correctness using interval arithmetic and adaptive subdivision for 𝑛 = 2, 3. Faster variants of such algorithms were described by Yap et al. (2009, 2014) [10] [11]. Galehouse (2008) [9] succeeded in producing such algorithms for all 𝑛.
This thesis addresses the problem of computing isotopic approximations of co-dimension 2 manifolds, i.e., 𝑛 − 2 dimensional manifolds in 𝑛 dimensional Euclidean space. Such manifolds are the intersection of the zero sets of two equations in 𝑛 variables. The first interesting case is 𝑛 = 3, i.e., the problem of computing an isotopic approximation of a space curve in 3D. We work on devising new algorithms by extending the previous interval techniques in co-dimension 1. Moreover, we implement and visualize such curves in order to verify their practical efficiency.
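The interval-arithmetic exclusion test at the heart of these subdivision methods can be sketched in a few lines (an illustrative co-dimension 1 example in 2D, not the thesis algorithm): a box is discarded once the interval evaluation of f on it provably excludes zero, so the surviving boxes form a certified cover of the curve f = 0.

```python
# Minimal interval arithmetic: intervals are (lo, hi) pairs.
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(ps), max(ps))

def f_interval(bx, by):
    """Interval enclosure of f(x, y) = x^2 + y^2 - 1 on the box bx x by.
    (imul(bx, bx) is a valid, though not tight, enclosure of x^2.)"""
    return isub(iadd(imul(bx, bx), imul(by, by)), (1.0, 1.0))

def subdivide(box, depth):
    """Keep only the boxes on which f may vanish, down to a fixed depth."""
    (x0, x1), (y0, y1) = box
    lo, hi = f_interval((x0, x1), (y0, y1))
    if lo > 0 or hi < 0:            # certified: no zero of f in this box
        return []
    if depth == 0:
        return [box]
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    out = []
    for bx in ((x0, xm), (xm, x1)):
        for by in ((y0, ym), (ym, y1)):
            out += subdivide((bx, by), depth - 1)
    return out

# Certified cover of the unit circle inside [-2, 2]^2.
boxes = subdivide(((-2.0, 2.0), (-2.0, 2.0)), depth=5)
```

For the space-curve setting of the thesis, the same exclusion test is applied to two functions simultaneously in 3D boxes; the extra work lies in certifying the topology of the intersection curve, not in the exclusion step itself.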
-
Ph.D. Thesis
2023
Provably Robust and Accurate Methods for Rigid and Deformable Simulation with Contact
Ferguson, Zachary
Abstract
|
PDF
Title: Provably Robust and Accurate Methods for Rigid and Deformable Simulation with Contact
Candidate: Ferguson, Zachary
Advisor(s): Daniele Panozzo
Abstract:
Contacts are essential to virtually every aspect of life and play a vital role in many physical phenomena. Because of this, the study of contact mechanics has produced a deep wealth of knowledge. Surprisingly, however, simulating contact remains a challenge with many parameters to carefully adjust. Incorrect parameters can result in numerical explosions, intersections, and other failures. Our research seeks to address these problems by developing robust methods that can handle arbitrary scenarios with guaranteed success.
In this thesis, we introduce the Incremental Potential Contact (IPC) method. IPC is the first simulation algorithm for deformable and rigid bodies that is unconditionally robust, requires minimal parameter tuning, and provides a direct way of controlling the trade-off between running time and accuracy. We further back up these claims by providing a large-scale benchmark of continuous collision detection (CCD) algorithms (a core component of the IPC method) based on their efficiency and correctness. As part of this study, we introduce the first efficient CCD algorithm that is provably conservative. For extended accuracy and efficiency, we show how nonlinear geometry and function spaces can be used within the IPC framework. Finally, we introduce the first physically-based adaptive meshing strategy which produces more accurate discretizations depending on elastic, contact, and frictional forces.
This work and our open-source implementations have quickly garnered attention from the computer graphics, mechanical engineering, and biomechanical engineering communities for their robustness and ability to seamlessly handle scenarios that have long been a challenge. This marks a large step towards democratizing simulation tools for design, robotics, biomechanical, and visual effects applications, among others.
-
Ph.D. Thesis
2023
Understanding and Incorporating Mathematical Inductive Biases in Neural Networks
Finzi, Marc
Abstract
|
PDF
Title: Understanding and Incorporating Mathematical Inductive Biases in Neural Networks
Candidate: Finzi, Marc
Advisor(s): Andrew Gordon Wilson
Abstract:
To overcome the enormous sample complexity of deep learning models, we can leverage basic elements of human and scientific knowledge and imbue these elements into our models. By doing so, we can short-circuit the thousands of years of evolutionary development that enabled such rapid learning in humans, and the development of science, which provides a framework to fit new knowledge into. In this work I develop new methods for incorporating mathematical inductive biases into our models, biasing them towards solutions that reflect our priors and our knowledge. This work helps to broaden the scope and automation of equivariant model construction across diverse domains, uncover the role of inductive biases in learning and generalization, and develop new machine learning models for scientific applications that capture relevant scientific knowledge.
-
Ph.D. Thesis
2023
Deconstructing Models and Methods in Deep Learning
Izmailov, Pavel
Abstract
|
PDF
Title: Deconstructing Models and Methods in Deep Learning
Candidate: Izmailov, Pavel
Advisor(s): Andrew Gordon Wilson
Abstract:
Machine learning models are ultimately used to make
decisions in the real world, where mistakes can be incredibly costly.
We still understand surprisingly little about neural networks and the
procedures that we use to train them, and, as a result, our models are
brittle, often rely on spurious features, and generalize poorly under
minor distribution shifts. Moreover, these models are often unable to
faithfully represent uncertainty in their predictions, further
limiting their applicability. In this dissertation, I present results
on neural network loss surfaces, probabilistic deep learning,
uncertainty estimation and robustness to distribution shifts. In each
of these works, we aim to build foundational understanding of models,
training procedures, and their limitations, and then use this
understanding to develop practically impactful, interpretable, robust
and broadly applicable methods and models. -
Ph.D. Thesis
2023
Learning structured and stable reduced models from data with operator inference
Sawant, Nihar
Abstract
|
PDF
Title: Learning structured and stable reduced models from data with operator inference
Candidate: Sawant, Nihar
Advisor(s): Benjamin Peherstorfer
Abstract:
Operator inference learns low-dimensional dynamical-system models with polynomial nonlinear terms from trajectories of high-dimensional physical systems (non-intrusive model
reduction). This work focuses on the large class of physical systems that can be well described by models with quadratic and cubic nonlinear terms and proposes a regularizer for
operator inference that induces a stability bias onto learned models. The proposed regularizer is physics informed in the sense that it penalizes higher-order terms with large norms and
so explicitly leverages the polynomial model form that is given by the underlying physics.
This means that the proposed approach judiciously learns from data and physical insights
combined, rather than from either data or physics alone. A formulation of operator inference
is proposed that enforces model constraints for preserving structure such as symmetry and
definiteness in linear terms. Additionally, for a system of nonlinear conservation laws, we
enforce model constraints that preserve the entropy stability of the dynamical system. Numerical results demonstrate that models learned with operator inference and the proposed
regularizer and structure preservation are accurate and stable even in cases where using no
regularization and Tikhonov regularization leads to models that are unstable. -
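The stability-biased operator-inference fit described in the preceding abstract can be sketched as follows (an illustrative quadratic example with a hypothetical penalty weighting, not the thesis' exact formulation): the learned operators solve a least-squares problem whose Tikhonov penalty weights the higher-order block more heavily.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
A = rng.normal(size=(d, d)) - 2.0 * np.eye(d)           # "true" linear operator
H = 0.1 * rng.normal(size=(d, d, d))
# Symmetrize H so the Kronecker parameterization is identifiable
# (x_i x_j and x_j x_i are the same feature).
H = ((H + H.transpose(0, 2, 1)) / 2).reshape(d, d * d)

n_samples = 200
X = rng.normal(size=(n_samples, d))                     # sampled states
X2 = np.einsum('ni,nj->nij', X, X).reshape(n_samples, -1)
Xdot = X @ A.T + X2 @ H.T                               # xdot = A x + H (x kron x)

# Stability-biasing Tikhonov penalty: the quadratic block is penalized more
# heavily than the linear block, discouraging large higher-order terms.
lam_lin, lam_quad = 1e-8, 1e-6
w = np.sqrt(np.array([lam_lin] * d + [lam_quad] * (d * d)))
D_aug = np.vstack([np.hstack([X, X2]), np.diag(w)])     # augmented LS system
rhs = np.vstack([Xdot, np.zeros((d + d * d, d))])
O = np.linalg.lstsq(D_aug, rhs, rcond=None)[0].T        # stacked [A_hat | H_hat]
A_hat, H_hat = O[:, :d], O[:, d:]
```

With clean synthetic data the penalty barely perturbs the fit, but on noisy trajectories the same weighting shrinks spurious high-order terms, which is the stability bias the abstract describes.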
Ph.D. Thesis
2023
Continuous LWE and its Applications
Song, Min Jae
Abstract
|
PDF
Title: Continuous LWE and its Applications
Candidate: Song, Min Jae
Advisor(s): Oded Regev/Joan Bruna
Abstract:
Efficiently extracting useful information from high-dimensional data is a major challenge in machine learning (ML). Oftentimes, the challenge comes not from a lack of data, but from its high dimensionality and computational constraints. For instance, when data exhibits a low-dimensional structure, one could in principle exhaustively search over all candidate structures and obtain estimators with strong statistical guarantees. Of course, such a brute-force approach is prohibitively expensive in high dimensions, necessitating computationally efficient alternatives. When our problem, however, *persistently* eludes efficient algorithms, we may find ourselves asking the following perplexing question: is the failure due to our lack of algorithmic ingenuity, or is the problem just too hard? Is there a *gap* between what we can achieve statistically and what we can achieve computationally?
This thesis is one attempt at answering such questions on the computational complexity of statistical inference. We provide results of both positive and negative nature on the complexity of canonical learning problems by establishing connections between ML and lattice-based cryptography. The Continuous Learning with Errors (CLWE) problem, which can be seen as a continuous variant of the well-known Learning with Errors (LWE) problem from lattice-based cryptography, lies at the center of this fruitful connection.
In the first part of this thesis, we show that CLWE enjoys essentially the same average-case hardness guarantees as LWE. This result has several important applications. For example, it shows that estimating the density of high-dimensional Gaussian mixtures is computationally hard, and gives rise to "backdoored" Gaussian distributions that can be used to plant undetectable backdoors in ML models and construct novel public-key encryption schemes.
Next, we focus on the "backdoored" Gaussian distributions, which we refer to as Gaussian Pancakes, and the problem of distinguishing these distributions from the standard Gaussian. We provide several pieces of evidence for the hardness of this distinguishing problem, based on a reduction from CLWE and lower bounds against restricted classes of algorithms, such as algorithms that compute low-degree polynomials of the observations.
Finally, we end on a positive note by showing that the Lenstra-Lenstra-Lovasz (LLL) algorithm, commonly used in computational number theory and lattice-based cryptography, has surprising implications for noiseless inference. In particular, we show that LLL solves both CLWE and Gaussian Pancakes in the noiseless setting, in spite of the low-degree lower bound for Gaussian Pancakes. Furthermore, we show that LLL surpasses Sum-of-Squares and Approximate Message Passing algorithms, two methods often conjectured to be optimal among polynomial-time algorithms, on other noiseless problems such as Gaussian Clustering and Gaussian Phase Retrieval. These results highlight the crucial but subtle role of noise and hidden algebraic structure in the onset of statistical-to-computational gaps. -
Ph.D. Thesis
2023
Expanding Structural Design through Shape Optimization and Microstructures
Tozoni, Davi Colli
Abstract
|
PDF
Title: Expanding Structural Design through Shape Optimization and Microstructures
Candidate: Tozoni, Davi Colli
Advisor(s): Denis Zorin
Abstract:
3D printing and other modern manufacturing tools allow users to design and produce customized objects for their needs at considerably low cost. However, designing structures that perform well is not an easy task, and doing it manually can be a very slow and tedious process. In this context, structural optimization techniques can be very useful and help automate the design and analysis process.
This thesis describes techniques that can expand the usage of structural optimization for digital fabrication by formulating optimization to be used with simulation models that are closer to reality, through the addition of contact and friction. Moreover, we show a fast method to compute gradients from differentiable simulations, which can be used to optimize shape, material and physical properties of our domain. In addition, we provide ways of expanding the use of two-scale topology optimization by presenting microstructures that have a smooth map from material to geometry and which can be used on curved shapes defined by irregular lattices with close to rhombic cells. Finally, we introduce two low-parametric microstructures that together are able to cover almost the whole possible range of elastic properties for isotropic metamaterials.
Our results in simulation and physical experiments, both for static and time-dependent scenarios, show the advantages of our techniques and how they can be used in practice. -
Ph.D. Thesis
2022
Enhancing Robustness through Domain Faithful Deep Learning Systems
Balashankar, Ananth
Abstract
|
PDF
Title: Enhancing Robustness through Domain Faithful Deep Learning Systems
Candidate: Balashankar, Ananth
Advisor(s): Lakshminarayanan Subramanian
Abstract:
In high-stakes domains like health, socio-economic inference, and content moderation, a fundamental roadblock to relying on deep learning systems is that models' predictions diverge from established domain knowledge when deployed in the real world and fail to faithfully incorporate domain-specific structure. In this thesis, I focus on the design of Domain Faithful Deep Learning Systems, which translate expert-understandable domain knowledge and constraints so that they are faithfully incorporated into robust deep learning models. Through methodological contributions in causal-aware ML model design, constrained optimization, counterfactual data augmentation, and feature selection, I address the core research questions of “What data distributions do domain practitioners care about?”, “How do we faithfully convert domain knowledge into model constraints for better generalization?”, and “How do we evaluate whether the ML models we learn are grounded in domain knowledge, and in what ways do they deviate?”. I demonstrate how, through these new approaches to incorporating domain knowledge, I have been able to meaningfully improve performance in four real-world applications: news-based famine forecasting, medication recommendation, causal question answering, and toxicity detection in online social media. These causal-aware and robust prediction models, developed in collaboration with the World Bank and Google, show that incorporating domain-specific structure is essential for building robust predictive models.
-
M.S. Thesis
2022
Symbolic Execution of GRASShopper Programs
Cox, Eric
Abstract
|
PDF
Title: Symbolic Execution of GRASShopper Programs
Candidate: Cox, Eric
Advisor(s): Thomas Wies
Abstract:
Symbolic execution is an efficient and viable alternative approach to building deductive verification tools that fully automate the formal verification of programs. In many cases, symbolic execution provides performance gains over verification condition generation (VCG) based tools because it directly manipulates in-memory data structures.
This thesis presents the design and implementation of a symbolic execution engine for the GRASShopper programming language, which already supports verification using VCG. The goal of this work was to adapt ideas from the symbolic execution engine of the Viper verification infrastructure to the semantics of GRASShopper and to demonstrate its utility on sample programs. We present a rigorous description of the operational semantics of the symbolic interpreter, discuss implementation details, and illustrate the symbolic execution behavior on a set of sample programs.
To explore the details of implementing a symbolic execution backend for GRASShopper, this work introduces a method for encoding snapshots at the struct-field level using injective functions. In addition, several language extensions were added to the GRASShopper user-facing language and the intermediate representation. These extensions add support for finer-grained permissions on individual fields rather than granting permissions to all fields of a structure, for unfolding and folding of recursive predicates, and for if-then-else expressions in predicates and heap-dependent functions.
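The core loop of any symbolic interpreter can be illustrated with a tiny, language-agnostic sketch (far simpler than the GRASShopper engine, and with a hypothetical expression encoding): a symbolic store maps variables to expression trees, and each conditional forks the path condition instead of evaluating the guard concretely.

```python
# Programs are lists of statements: ('assign', var, expr) or
# ('if', cond, then_block, else_block). Expressions are nested tuples over
# symbolic variable names, so no substitution into the store is needed here.

def sym_exec(prog, store, path_cond):
    """Return all (store, path_condition) pairs reachable from `prog`."""
    if not prog:
        return [(store, path_cond)]
    stmt, rest = prog[0], prog[1:]
    if stmt[0] == 'assign':
        _, var, expr = stmt
        return sym_exec(rest, {**store, var: expr}, path_cond)
    if stmt[0] == 'if':
        _, cond, then_b, else_b = stmt
        # Fork: one symbolic path per branch, each with its own constraint.
        paths = sym_exec(then_b + rest, store, path_cond + [cond])
        paths += sym_exec(else_b + rest, store, path_cond + [('not', cond)])
        return paths
    raise ValueError(stmt)

# abs(x): if x < 0 then y := -x else y := x
prog = [('if', ('<', 'x', 0),
         [('assign', 'y', ('neg', 'x'))],
         [('assign', 'y', 'x')])]
paths = sym_exec(prog, {'x': 'x'}, [])
```

A real engine such as the one in this thesis additionally tracks heap permissions and snapshots in the store and discharges each path condition with an SMT solver, but the fork-and-constrain structure is the same.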
-
M.S. Thesis
2022
Program Unrolling by Abstract Interpretation for Probabilistic Proofs
Feldan, Daniel
Abstract
|
PDF
Title: Program Unrolling by Abstract Interpretation for Probabilistic Proofs
Candidate: Feldan, Daniel
Advisor(s): Patrick Cousot
Abstract:
Zero-knowledge proofs are cryptographic protocols that enable one party to prove the validity of a statement to another party while revealing no additional information beyond the statement’s truth. These protocols have a myriad of applications, especially within the realm of cloud computing. Zero-knowledge protocols can be used to probabilistically verify cloud-computed programs by first converting an input program into a boolean circuit, then using this circuit in a zero-knowledge proof system to show the correctness of the computed output. This work focuses on making zero-knowledge proofs practical to implement, as many current protocols are very computationally expensive.
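To convey the flavor of the technique named in the title (a hypothetical toy in Python, not the thesis's formalization), consider analyzing a loop in the interval abstract domain while unrolling it: because each iteration is stepped individually, the analysis stays exact and never needs widening or narrowing.

```python
class Interval:
    # classic interval abstract domain element [lo, hi]
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def add(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def analyze_unrolled(bound):
    # abstract program: x := [0, 1]; repeat `bound` times: x := x + [1, 2]
    x = Interval(0, 1)
    trace = [x]
    for _ in range(bound):          # unrolling: one abstract step per iteration,
        x = x.add(Interval(1, 2))   # so no widening/narrowing is ever needed
        trace.append(x)
    return trace

print(analyze_unrolled(3)[-1])      # exact final bounds: [3, 7]
```

The precise per-iteration facts in `trace` are what make aggressive optimization of the unrolled program possible before circuit generation.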
The primary contribution of this thesis is a formalization of a program transformation technique that combines abstract interpretation and program unrolling to analyze, transform, and optimize a program before transforming it into a boolean circuit. By analyzing a program with an abstract interpreter while simultaneously unrolling it, we achieve a significantly more precise static analysis without the need for traditional widening and narrowing operations. This approach enables aggressive optimization of the unrolled program, reducing both the cost of transforming the program into a circuit and the resulting circuit’s size.
-
Ph.D. Thesis
2022
Unstructured Mesh Generation and Repairing in the Wild
Hu, Yixin
Abstract
|
PDF
Title: Unstructured Mesh Generation and Repairing in the Wild
Candidate: Hu, Yixin
Advisor(s): Daniele Panozzo
Abstract:
A mesh is a representation used to digitally describe the boundary or volume of an object for manipulation and analysis. Meshes are used in many fields, including physical simulation for manufacturing, architecture design, and medical scan analysis. In this thesis, we propose a series of meshing algorithms, named WildMeshing, that tackle one of the long-standing yet fundamental problems in geometry modeling: robustly and automatically generating high-quality triangle and tetrahedral meshes and repairing imperfect geometries in the wild. Unlike existing methods, which make assumptions about the input and thus often fail on real-world geometries, WildMeshing provides strict guarantees of termination and is a black box that can be easily integrated into any geometry processing pipeline in research or industry.
This thesis first investigates the problem of tetrahedralizing 3D geometries represented by piecewise linear surfaces. We propose an algorithm, TetWild, that is unconditionally robust, requires no user interaction, and can directly convert a triangle soup into an analysis-ready volumetric tetrahedral mesh. It relies on three core principles: a hybrid geometric kernel, a tolerance of the mesh relative to the surface input, and iterative mesh optimization with guarantees on the output validity. We then consider improving the algorithm's efficiency for tetrahedralizing large-scale geometries. We design a new algorithm, fTetWild, that is based on the principles of TetWild but replaces the hybrid kernel with a floating-point kernel, which greatly reduces runtime while maintaining the same robustness. Next, this thesis explores meshing curved geometries. We start from the problem of triangulating 2D planar shapes whose boundaries are represented by curves. We introduce TriWild, an algorithm to robustly generate curved triangle meshes reproducing smooth feature curves, which leads to coarse meshes designed to match the simulation requirements imposed by applications and avoids the geometrical errors introduced by linear meshes.
We test our algorithms on over ten thousand real-world input geometries, achieving a 100% success rate. Our methods generate meshes without any assumptions about the input while repairing imperfect geometries, opening the door to automatic, large-scale processing of real-world geometric data.
-
Ph.D. Thesis
2022
Data-driven Solutions for Addressing Two Pressing Urban Sustainability Challenges: Air Pollution Reduction and Traffic Management
Iyer, Shiva
Abstract
|
PDF
Title: Data-driven Solutions for Addressing Two Pressing Urban Sustainability Challenges: Air Pollution Reduction and Traffic Management
Candidate: Iyer, Shiva
Advisor(s): Lakshmi Subramanian
Abstract:
Data science and AI-driven solutions are abundant today for a large variety of practical applications. With a continuing focus on urban development and sustainability, in this thesis I present our attempts at addressing two prominent urban challenges: urban air pollution control and road traffic congestion management. For both applications, we have developed novel methods, such as the message-passing recurrent neural network, for predictive analytics and inference. The city of Delhi has 32 air quality monitors over an area of about 900 sq km, but these provide no information on fine-grained variations in air quality across the city, which is needed to reason about citizen exposure and identify hotspots. We have installed 28 low-cost sensors, many of them concentrated in the south Delhi region. We have developed a generic definition of "hotspots" in terms of spatio-temporal variations, with which we validate some known hotspots and discover new ones. We have also designed a novel model combining geostatistics and deep learning that makes hourly spatio-temporal pollution predictions with a MAPE of about 10% across all locations.
In the context of urban traffic management, we first show that road networks can experience traffic jams over prolonged periods, such as several hours, due to sudden traffic bursts over short time scales. We illustrate this using real data from two different cities, New York and Nairobi. We provide a formalism for understanding the phenomena of traffic collapse and sudden jams. In the second work, we devise a novel model, the message-passing recurrent neural network (MPRNN), for modeling the propagation of congestion within a road network and forecasting congestion. The MPRNN achieves the lowest mean error of < 0.3 mph when predicting ahead in 10-minute intervals, for up to 3 road segments ahead (message passing across 3 hops). Finally, in the third work, we describe an algorithm for signal control in free-flow road networks, inspired by congestion control in computer networks. Our proposed method substantially enhances the operational capacity of free-flow road networks in the real world (by between 3× and 5×) and prevents congestion collapse.
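As a rough intuition for the message-passing idea (a hypothetical toy sketch; the actual MPRNN couples this aggregation with recurrent units and learned weights), each road segment's state is updated from its own state plus messages from its upstream neighbors, so congestion propagates hop by hop:

```python
def message_passing_step(speeds, upstream, self_w=0.7, msg_w=0.3):
    # speeds: current speed per segment (mph); upstream: adjacency of the road graph
    new = {}
    for seg, spd in speeds.items():
        nbrs = upstream.get(seg, [])
        msg = sum(speeds[u] for u in nbrs) / len(nbrs) if nbrs else spd
        new[seg] = self_w * spd + msg_w * msg   # jammed neighbors drag the estimate down
    return new

speeds = {"a": 30.0, "b": 25.0, "c": 5.0}       # segment c is jammed
upstream = {"a": ["c"], "b": ["a"], "c": []}
print(message_passing_step(speeds, upstream))
```

Stacking several such steps corresponds to passing messages across several hops, mirroring the 3-hop forecasts described above.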
-
Ph.D. Thesis
2022
Synergistic Geometry Processing: from Robust Geometric Modeling to Scalable Physical Simulation
Jiang, Zhongshi
Abstract
|
PDF
Title: Synergistic Geometry Processing: from Robust Geometric Modeling to Scalable Physical Simulation
Candidate: Jiang, Zhongshi
Advisor(s): Daniele Panozzo
Abstract:
Various applications, from artistic creation, to scientific computing,
require the processing and reasoning of 3D digital objects.
The computational modeling of 3D geometric shapes, materials, and
textures, as well as the simulation of their deformation and
interactions, is essential to bring the algorithmic power of computing
to real-life manufacture, architecture, and medical device design.
Depending on the specific numerical properties, better algorithm
designs might prefer 3D data with different representations, for
example, in planes, surfaces, or inside volumes. This thesis investigates problems related to the representations of
data on 3D shapes and across different domains,
so computations for different stages within a pipeline, may come
together synergistically without manual tuning that disrupts an
automated data flow.
I propose novel geometrical principles in various geometric modeling
and processing stages. I also showcase various geometric computing
applications that easily integrate such principles to guarantee the
geometry validity and algorithm effectiveness of surface
parameterization, rendering, deformation/animation, and mechanical
simulation.
In addition, we can finally explore creative solutions that reliably
coarsen the surface. Such simplification accelerates everyday
geometric modeling operations; the contribution also includes a
scalable method to construct coarse and curved meshes for fast
animation and scientific computing.
Furthermore, the thesis provides a declarative way to formulate mesh
processing and adaptation algorithms to facilitate the practical
development of robust and reliable mesh processing software.
Finally, the thesis includes extensive numerical validations involving
tens of thousands of complex geometric shapes. To maintain
replicability and foster further research in this direction, I also
released the implementation and generated data to be open source and
accessible.
-
Ph.D. Thesis
2022
Cryptography: From Practice to Theory
Karthikeyan, Harish
Abstract
|
PDF
Title: Cryptography: From Practice to Theory
Candidate: Karthikeyan, Harish
Advisor(s): Yevgeniy Dodis
Abstract:
This work is yet another attempt to turn an age-old adage on its head by deriving inspiration for theoretical research from problems that are germane to practitioners and real-world deployment. This could be viewed as a departure from the practice of creating real-world solutions that trace their origin to theoretical research, or alternatively from ex post facto theoretical analyses of practically deployed solutions, which can be rather ad hoc. Specifically, we look at four different problems that are relevant to practical deployment: random number generation, provably secure block ciphers, searching over encrypted data, and forward-secure group messaging.
-
Ph.D. Thesis
2022
Scalable Distributed Payment Systems with Minimal Trust Assumptions
Kattis, Assimakis
Abstract
|
PDF
Title: Scalable Distributed Payment Systems with Minimal Trust Assumptions
Candidate: Kattis, Assimakis
Advisor(s): Prof. Joseph Bonneau
Abstract:
Over the last decade, the security and resilience of Bitcoin
as a stable payment network has motivated substantial study of the
viability of distributed payment protocols, with many works focusing on their suitability as alternatives to centralized payment processing. We investigate the design of scalable distributed payment systems in the permissionless setting, where no actors in the protocol can be trusted or identified with out-of-band information. Scalability is identified with two desirable properties: high transaction processing rate (or throughput) and low confirmation latency (or settlement times). We analyze the trade-offs inherent to distributed protocols that prevent
naive optimization of the above parameters and study techniques from verifiable computation as potential tools for overcoming these
bottlenecks.
One technique to increase throughput in distributed payment systems
involves the use of Succinct Non-interactive ARguments of Knowledge
(SNARKs, or SNARK proofs) to verify the integrity of transactions.
Transaction rollups are one such solution, using SNARK computations to achieve scalability. Many instantiations of rollups leveraging SNARKs show encouraging evidence that this technique could achieve commercial-capacity throughput rates if implemented on top of current distributed payment systems, even in the smart-contract setting. Although promising, all rollup approaches require the resolution of an additional yet crucial question. For protocols operating in the permissionless setting, we need to ensure that a system relying on proof generation to scale also incentivizes actors to compute proofs cheaply and quickly. This is a governance problem, as the protocol needs to decide how participants will be chosen to perform these (expensive) computations. We pose the question of who will compute the proofs, identify it as a consensus problem, and provide a technical proposal towards its resolution.
Our main contributions are twofold: in Part I, we design a
permissionless consensus protocol that solves the problem of state
verification for resource-limited clients in an incentive-compatible way. We show formal proofs of security and achieve minimal resource requirements for full ledger verification. This protocol showcases our key contribution: the design of a proof-of-work (PoW) process that computes SNARK proofs as valid outputs. Suitably choosing the statement whose proof is generated through PoW provides an incentive-compatible way to enforce the computation required by proof-based scaling techniques. In Part II, we look at one of the key components of SNARK-based throughput optimization: the non-interactive proof itself. We design a novel proof system which provides security guarantees in the trustless setting, while still being small and efficiently computable.
This proof system (a transparent SNARK, or STARK) can be used directly for scaling throughput in distributed payments through transaction rollups. In conjunction with an incentivized PoW process, it also demonstrates a way for participants in consensus to quickly generate the rollup proofs in a permissionless way.
-
Ph.D. Thesis
2022
Characterizing and Resolving Degeneracies in Neural Autoregressive Text Generation
Kulikov, Ilia
Abstract
|
PDF
Title: Characterizing and Resolving Degeneracies in Neural Autoregressive Text Generation
Candidate: Kulikov, Ilia
Advisor(s): Kyunghyun Cho, Jason Weston
Abstract:
Autoregressive neural networks have shown great success as part of the sequence-to-sequence framework, solving a diverse set of sequence generation tasks. These tasks include machine translation, dialogue modeling, question answering, text summarization, and sequence completion. In spite of this visible success, many challenges remain and are reported across these tasks. These challenges are usually discussed as visible deviations of the predicted sequence from the given reference. It is, however, not always possible to make this comparison, because interactive tasks, such as dialogue modeling, do not come with reference sequences in the middle of a conversation at test time. We refer to such deviations as degeneracies, which result in degenerate sequences. In this thesis, we work on reducing widely reported degeneracies within specific tasks or in text generation in general. To do so, we often first need to formulate the degeneracy in a measurable way and hypothesize about its major cause.
We investigate the issue of oversmoothing, where the model assigns high probability to overly short sequences. We address this degeneracy from the learning side by proposing a novel regularization which directly minimizes the newly proposed oversmoothing rate. We show the effectiveness of the proposed method in the context of neural machine translation. Still concentrating on the learning aspect, we next address the problem of repetition in the context of sequence completion, where generated sequences have unreasonably many repetitive substrings compared to those seen in the data. We propose a novel unlikelihood training procedure which penalizes undesired continuations, such as repetitive substrings. Unlikelihood training significantly reduces the number of repetitions and improves the naturalness of the generated continuations. One issue with the repetition degeneracy is that it can also lead to non-termination. We study whether the original model is able to terminate the repetitive loop itself, even if we do not enforce a maximum generated length during decoding. We connect this problem of non-termination with the consistency of the distribution induced by the chosen decoding algorithm. After proving that an incomplete decoding algorithm, such as beam search, may induce an inconsistent distribution even when paired with a consistent model, we propose an alternative parametrization which guarantees that the decoding-induced distribution is consistent. After that, we switch to the more complicated scenario of conversation modeling, where the model has to generate a response in a multi-turn setting. We investigate the issue of unengaging or dull responses by highlighting the importance of the decoding algorithm. We observe a low diversity of beam search candidates compared to iterative beam search, which explores a wider search subspace via efficient pruning. We find that the selection criterion is as important as the decoding strategy.
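The unlikelihood idea can be sketched in a few lines (a simplified, hypothetical version; the thesis applies it during training over whole sequences): in addition to maximizing the likelihood of the gold token, probability mass placed on undesired tokens, such as already-repeated ones, is penalized:

```python
import math

def unlikelihood_loss(probs, target, negatives, alpha=1.0):
    # probs: next-token distribution; target: index of the gold token;
    # negatives: indices of tokens to discourage (e.g. repeated tokens)
    mle = -math.log(probs[target])                           # usual likelihood term
    ul = -sum(math.log(1.0 - probs[t]) for t in negatives)   # unlikelihood penalty
    return mle + alpha * ul

probs = [0.6, 0.3, 0.1]          # toy distribution over a 3-token vocabulary
print(unlikelihood_loss(probs, target=0, negatives=[1]))
```

Driving down `probs[t]` for negative tokens shrinks the penalty term, which is what discourages repetitive continuations.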
Along the way, we stress the importance of careful human evaluation in the presence of annotator bias and calibrate the observed scores using Bayesian inference. While we address several kinds of degeneracy, the list we tackle is not exhaustive. For instance, neural machine translation is known to produce hallucinated translations or to copy large parts of the input sentence. Furthermore, degeneracies exist beyond autoregressive modeling, in both non-autoregressive and semi-autoregressive settings. We believe our contributions will be helpful for future research solving new problems.
-
Ph.D. Thesis
2022
Finding and Fixing Undesirable Behaviors in Pretrained Language Models
Perez, Ethan
Abstract
|
PDF
Title: Finding and Fixing Undesirable Behaviors in Pretrained Language Models
Candidate: Perez, Ethan
Advisor(s): Kyunghyun Cho
Abstract:
Natural Language Processing (NLP) promises to deliver tools for a variety of impactful applications, ranging from automatic summarization to question-answering systems and conversational assistants. Recently, NLP has been revolutionized by the advent of Pretrained Language Models (PLMs). We train PLMs using "self-supervised" learning objectives -- prediction tasks that operate on unlabeled text alone, such as next word prediction or missing word prediction. As a result, PLMs are able to learn from large quantities of internet text, to obtain strong performance on many NLP tasks.
Despite the success of self-supervised objectives, they face a fundamental limitation: they train PLMs to behave in ways that are misaligned with human preferences. PLMs learn to repeat internet misinformation, offensive jokes, and personal contact information, and it is hard to control or guide the text that PLMs generate. We show that PLM-based classifiers are effective at predicting which texts humans prefer. As a result, it is possible to use such classifiers as a learning signal to automatically correct the PLM. We showcase this approach by training a high-quality retrieval system, obtaining strong performance across a variety of tasks using Retrieval-Augmented Generation (RAG). Even after such training schemes, some undesirable behaviors may remain undetected during training. We therefore go a step further and use other PLMs to generate inputs that elicit undesirable behaviors from the PLM, to preemptively find and fix such behaviors. Overall, we find that some of the most powerful tools for aligning PLMs with human preferences are PLMs themselves.
-
Ph.D. Thesis
2022
Identifying, Addressing, and Understanding Challenging Cases in Machine Learning
Resnick, Cinjon
Abstract
|
PDF
Title: Identifying, Addressing, and Understanding Challenging Cases in Machine Learning
Candidate: Resnick, Cinjon
Advisor(s): Kyunghyun Cho/Joan Bruna
Abstract:
Machine learning has advanced tremendously this past decade. Object
detection systems routinely perform beyond human-level accuracy with
no loss in speed, game-playing agents play at superhuman level in real
time, and generative models write language useful enough for
downstream products. And yet, autonomous vehicles (AV) crash due to
surprising mistakes, the best gaming agents lose to simple strategies,
and our language models produce nonsensical utterances at a
surprisingly high rate. I could have chosen examples from any field
because these failures are not endemic to just vision, games, or
language. There are always challenging cases remaining after training
our system, and these cases are where the systems fail. This thesis focuses on the challenging cases in a machine learning
system in order to improve its overall capabilities. In the first
part, we study methods for identifying the challenging cases, an
important precursor for improving the system. In the second part, we
then study methods for addressing the challenging cases, arguably the
most important part of this thesis for real-world applicability. And
in the third part, we study methods for understanding the root cause
of challenging cases, an important step in attaining guarantees about
our system's capabilities. As machine learning is practiced in many
different settings, our study does too. We explore these questions in
the context of computer vision, language learning, and task learning.
The connecting thread among them is the drive towards creating a
communicative and visually aware robot that can capably complete
household tasks. In that context, we present in parallel the Machine
Learning Application Framework that highlights where our contributions
improve downstream applications. Altogether, this work studies how to identify, address, and
understand the most challenging cases over a diverse array of machine
learning systems. This research is imperative towards deploying many
systems that we care about, including most autonomous vehicles and
health assistants. Consequently, it represents an important step
towards society's technological goals.
-
Ph.D. Thesis
2022
Constrained Surface Parameterization Methods with Guarantees
Shen, Hanxiao
Abstract
|
PDF
Title: Constrained Surface Parameterization Methods with Guarantees
Candidate: Shen, Hanxiao
Advisor(s): Denis Zorin/Daniele Panozzo
Abstract:
Surface parameterization for piecewise-linear surfaces is a
fundamental problem in computer graphics and geometry processing. The
generation of surface parameterization is a key step in numerous
applications like texture mapping, remeshing, quadrangulation,
inter-surface mapping, and shape analysis. Due to its popularity, the robustness of mapping
generation methods plays a major role in its applicability. In
addition, depending on the specific requirements of the application at
hand, various formulations of constraints are used to control or guide
the parameterization.
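For intuition on positional constraints (a hypothetical toy sketch, not one of this thesis's algorithms), the classical Tutte-style construction that the first part improves upon pins boundary vertices to a convex polygon and places each interior vertex at the average of its neighbors, which a simple fixed-point iteration can solve:

```python
# pinned boundary of a unit square plus two interior vertices (toy mesh graph)
boundary = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
neighbors = {4: [0, 1, 2, 5], 5: [1, 2, 3, 4]}   # interior adjacency

pos = {v: (0.1, 0.9) for v in neighbors}         # arbitrary interior start
pos.update(boundary)                             # boundary positions are fixed

for _ in range(200):                             # Jacobi iteration to the fixed point
    new = {v: tuple(sum(pos[u][c] for u in nbrs) / len(nbrs) for c in (0, 1))
           for v, nbrs in neighbors.items()}
    pos.update(new)

print(pos[4], pos[5])                            # interior vertices land inside the square
```

With a convex boundary this linear system has a unique solution; the thesis's contribution is handling the cases where such classical guarantees break down.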
Typical examples of the constraints are point constraints, curvature
constraints, and topological constraints. In many practical cases, to
ensure that the input assumptions of downstream algorithms are
satisfied, such constraints need to be imposed exactly (as
opposed, e.g., to approximation via penalties). In this work, we investigate different
constraint formulations suitable for various applications and present
algorithms with guarantees to generate parameterization fully
satisfying these constraints. In the first part of this thesis, we
develop an algorithm that solves the
classical problem of mapping a disk domain with boundary constraints;
in the special case of domains with convex boundary, it improves, in
terms of robustness, on the classical Tutte's algorithm. Utilizing it
as a building block, we design a parameterization method that supports
arbitrary positional constraints. In the second part, building on
recent developments in the theory of discrete uniformization, we
develop a highly robust algorithm for discrete conformal maps that
satisfy prescribed curvature constraints. In the third part, we
provide a constructive proof for the existence of globally seamless
parameterization that matches admissible user-prescribed cone position
and curvature constraints. Lastly, we generalize this to constraints
on holonomy angles on a homology basis of loops, which fully capture
the topology of seamless parameterizations. This method yields
parameterizations that are very close to field-aligned
parametrizations obtained using commonly used methods but, in contrast
to these methods, guarantees the existence of a solution satisfying all
constraints.
-
Ph.D. Thesis
2022
On deep learning tools for scientific discovery in healthcare
Sudarshan, Mukund
Abstract
|
PDF
Title: On deep learning tools for scientific discovery in healthcare
Candidate: Sudarshan, Mukund
Advisor(s): Rajesh Ranganath/Oded Regev
Abstract:
Scientists validate hypotheses by building mathematical models of the
real world. They make inferences by checking if their models are
supported by data. Often, the models are hand-crafted and do not
accurately reflect real processes. This often leads to low power to
make scientific discoveries or even false discoveries.
Machine learning can solve these issues in several ways. By allowing
data to inform the construction of models, scientists can use machine
learning to create more powerful statistical hypothesis testing
procedures, or build more realistic models of underlying processes.
This thesis details techniques for both of these approaches.
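To convey the first idea at a toy scale (a hypothetical, simplified sketch: a marginal permutation test, whereas the thesis's tests are conditional and model-based), one can compare a model's fit on real data against fits on data where the candidate feature has been shuffled:

```python
import random

def fit_r2(xs, ys):
    # squared correlation (R^2) of the best 1-D least-squares fit y ~ a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return 0.0 if vx == 0 or vy == 0 else cov * cov / (vx * vy)

def perm_independence_test(x, y, n_perm=500, seed=0):
    rng = random.Random(seed)
    real = fit_r2(x, y)                 # predictive power of the real feature
    hits = 0
    for _ in range(n_perm):
        xp = x[:]
        rng.shuffle(xp)                 # shuffling destroys any x-y association
        hits += fit_r2(xp, y) >= real
    return (1 + hits) / (1 + n_perm)    # permutation p-value

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(200)]
y = [2 * a + rng.gauss(0, 0.5) for a in x]   # y strongly depends on x
print(perm_independence_test(x, y))
```

A conditional test additionally holds covariates fixed (resampling x given z rather than unconditionally), which is what allows the resulting p-values to feed false-discovery-rate control across many hypotheses.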
First we address the creation of machine learning-based statistical
discovery procedures for scientific discovery. Specifically, we
discuss how machine learning can be used to construct conditional
independence tests, which are used to identify causal links in data.
We detail how such methods can be used to control the false discovery
rate when testing multiple hypotheses. We then apply these techniques
to two important domains. We solve a timely problem in medical
informatics: identifying a small set of variables that are highly
informative of whether an ICU patient with Covid will experience an
adverse event. At the height of Covid in 2020, NYU doctors used a
deployed version of this tool to quickly identify patients to
discharge and free up beds in the ICU. We also apply our methods to a
problem in cancer genomics, where the goal is to identify a set of
gene mutations that are most predictive of tumor metastasis. In the
near future, we expect tools like ours to lead to targeted gene
therapies that tailor treatments to the mutations present in an
individual's tumor.
Next we detail the construction of an interpretable machine learning
model that helps understand an important step in the creation of
proteins. Specifically, we build a model to understand RNA splicing,
which involves removing non-coding regions from precursor messenger
mRNA (pre-mRNA) and joining coding regions together. Our model
accurately models splicing outcomes across a large dataset of
sequences, but more importantly leads to several biologically
validated insights. We use the interpretable nature of our model to
infer that most splicing decisions are a function of a small set of
short sequence features. We also learn that certain pre-mRNA secondary
structures strongly inhibit the inclusion of a coding region in the
final mRNA transcript. Finally, we validate these model-driven
findings by carefully designing experiments for the wet lab.
-
Ph.D. Thesis
2022
Efficient Verification of Untrusted Services
Tzialla, Ioanna
Abstract
|
PDF
Title: Efficient Verification of Untrusted Services
Candidate: Tzialla, Ioanna
Advisor(s): Michael Walfish
Abstract:
Using a third-party service today requires trusting that it is executing as promised. Meanwhile, the correct execution of services is regularly impeded by failures, bugs, misconfigurations, operational mistakes, and insider attacks. Is it possible to verify, instead of trust, that a third-party service executes correctly?
We study this question for two services that execute on remote servers: transparency dictionaries, a foundational infrastructure for end-to-end encryption and other applications, and event-driven web applications. For each of these two services, we leverage their workloads to introduce a practical system that allows a verifier to get a strong security guarantee that the service executes correctly.
In the case of a transparency dictionary, this guarantee is in the form of a cryptographic proof provided by the service. Producing cryptographic proofs typically requires high resource costs. We show that tailoring the cryptographic tools used by the transparency dictionary for its use case mitigates these costs and results in a system, Verdict, that scales to dictionaries with millions of entries while imposing modest overheads on the service and its clients.
In the case of outsourced event-driven web applications, the verifier gets the required guarantee by replaying the requests on a trusted machine using Karousos, a novel record-replay system in which the service has the role of the untrusted recorder. Karousos takes advantage of the particular characteristics of event-driven web applications to enable the replayer (the verifier) to use less computational resources than the recorder (the service), while imposing tolerable overheads on the
recorder and keeping communication small. -
Ph.D. Thesis
2022
NLP Evaluation in the Time of Large Language Models
Wang, Alex
Abstract
|
PDF
Title: NLP Evaluation in the Time of Large Language Models
Candidate: Wang, Alex
Advisor(s): Kyunghyun Cho
Abstract:
The field of natural language processing (NLP) has been
dramatically impacted by the creation and proliferation of large
language models that are pretrained on Internet-scale text data. These
models have led to significant improvements on a myriad of NLP tasks.
However, as the capabilities of these models drive up performance on
existing task benchmarks, there is a critical need for evaluation
metrics that are up-to-date with current models. In this dissertation,
we develop NLP evaluation methodologies that benchmark and leverage
pretrained language models. We first present two multi-task benchmarks
for evaluating the generalization ability of NLP models and discuss
the role of these benchmarks in the development of large language
models. Next, we demonstrate that we can leverage the capabilities of
pretrained language models to develop new automatic evaluation metrics
that better measure the semantics of model-generated text.
Specifically, we make use of the question answering abilities of
pretrained models to evaluate the faithfulness of automatically
generated summaries. Finally, we explore methods for crowdsourcing
high-quality and challenging text generation data to address issues of
data quality that have been surfaced by the ability of language models
to replicate noise in benchmark datasets. Overall, we show that the
rise of pretrained language models presents both challenges and
opportunities in how we evaluate NLP systems, and that incorporating
these very models into our evaluation methodologies offers a promising
path forward.
-
Ph.D. Thesis
2022
Improving Sample Efficiency in Off-policy and Offline Deep Reinforcement Learning
Wu, Yanqiu (Autumn)
Abstract
|
PDF
Title: Improving Sample Efficiency in Off-policy and Offline Deep Reinforcement Learning
Candidate: Wu, Yanqiu (Autumn)
Advisor(s): Keith Ross
Abstract:
Reinforcement Learning (RL) is an area of Machine Learning in which agents are trained through trial and error to make a sequence of decisions in a given environment to achieve a goal. Traditional reinforcement learning methodology suffers from the curse of dimensionality. Fortunately, with the help of deep learning, Deep Reinforcement Learning (DRL) can overcome this issue and can often find high-performing policies for applications with large state and action spaces. Over the past few years, DRL has achieved major breakthroughs in complex tasks, such as outperforming human players in video games [Mnih et al. 2013; Vinyals et al. 2019], defeating the human world champion in Go [Silver et al. 2016, 2018], and autonomous robotics control [Lillicrap et al. 2019; Haarnoja et al. 2018a].
Despite the recent breakthroughs, sample efficiency remains an important issue in deep reinforcement learning. In complex tasks where data collection is very expensive, agents must train with relatively few interactions with the environment, making sample efficiency a central concern for practical DRL applications. This thesis addresses the sample efficiency problem in the context of off-policy and offline Deep Reinforcement Learning. We develop training algorithms that not only lead to policies with high asymptotic performance, but are also highly sample-efficient in both online and offline settings. We demonstrate the performance of our methods in simulated robotic locomotion environments.
In the first part of this thesis, we develop a streamlined off-policy algorithm that utilizes an output normalization scheme and non-uniform sampling. We identify the squashing exploration problem and show how maximum entropy DRL [Haarnoja et al. 2018a,b] helps to resolve it. Based on our observation, we develop an alternative output normalization scheme to maximum entropy algorithms. We show that this normalization scheme can then be combined with non-uniform sampling, resulting in high-performing policies. Next, we develop a simple off-policy algorithm that takes advantage of a high update-to-data (UTD) ratio and Q-ensembles, which demonstrates superior sample efficiency in early-stage training and also achieves high asymptotic performance in late-stage training. We employ Q-ensembles and keep the several lowest values for the update to address overestimation bias. Finally, we consider offline deep reinforcement learning. We introduce the novel notion of the “upper envelope of the data” and then develop an Imitation-Learning-based algorithm based on this notion. Our algorithm is computationally much faster and achieves state-of-the-art performance.
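The ensemble step described above admits a compact sketch. The function below is an illustrative assumption about how such a pessimistic target could be formed (the name, subset size, and randomization scheme are ours, not the thesis's exact algorithm):

```python
import numpy as np

def ensemble_min_target(q_values, reward, gamma, num_keep=2, rng=None):
    """Form a pessimistic Q-learning target from an ensemble of
    next-state Q-estimates: take the minimum over a small random
    subset of the ensemble to counteract overestimation bias."""
    rng = rng or np.random.default_rng(0)
    # Pick a few ensemble members at random, then keep their minimum.
    subset = rng.choice(len(q_values), size=num_keep, replace=False)
    pessimistic_q = float(np.min(q_values[subset]))
    return reward + gamma * pessimistic_q
```

With a high UTD ratio, a target of this form would be recomputed for many gradient updates per environment step.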
-
Ph.D. Thesis
2022
On-Policy Deep Reinforcement Learning — The Discounted and Average Reward Criteria
Zhang, Yiming
Abstract
|
PDF
Title: On-Policy Deep Reinforcement Learning — The Discounted and Average Reward Criteria
Candidate: Zhang, Yiming
Advisor(s): Keith Ross
Abstract:
Reinforcement Learning (RL) is the study of sequential decision making where an agent attempts to maximize its overall cumulative reward in some given environment. Combined with deep learning, reinforcement learning has made remarkable strides in the past decade in complex tasks such as playing video games (Mnih et al. 2013, Vinyals et al. 2019), playing Go (Silver et al. 2016, 2018), robotics (Lillicrap et al. 2016, Haarnoja et al. 2018), and chip design (Mirhoseini et al. 2021). However, despite these successes, modern RL algorithms often suffer from poor sample efficiency and lack of safety guarantees. In this thesis, we tackle these issues in the context of on-policy Deep Reinforcement Learning (DRL), both theoretically and algorithmically. This work addresses both the discounted and average reward criteria.
In the first part of this thesis, we develop theory for average reward on-policy reinforcement learning by extending recent results for local policy search. We show that previous work based on the discounted return (Schulman et al. 2015, Achiam et al. 2017) results in a non-meaningful bound in the average-reward setting. By addressing the average-reward criterion directly, we derive a novel bound which depends on the average divergence between the two policies and on Kemeny's constant. Based on this bound, we develop an iterative procedure which produces a sequence of monotonically improved policies for the average reward criterion. We show that this iterative procedure can then be combined with classic deep reinforcement learning methods, resulting in practical DRL algorithms that target the long-run average reward criterion. Next, we develop a unifying framework for the on-policy sample efficiency problem. This methodology uses a two-step approach which first learns an optimal policy in the non-parameterized policy space before projecting said policy back into the parameter space.
Our approach is general in that it applies to both discrete and continuous action spaces, and can handle a wide variety of proximity constraints. Finally we address the problem of reinforcement learning with safety constraints. We provide theoretical support that trust region-based methods can be applied to problems with both discounted and non-discounted cost constraints. We then propose a novel first-order algorithm for policy optimization for maximizing an agent's cumulative reward while at the same time satisfying a set of cost constraints. Our algorithm is extremely simple to implement and has an approximate upper bound for worst-case constraint violation throughout training.
-
Ph.D. Thesis
2021
Advances in computer bridge: techniques for a partial-information, communication-based game.
Bethe, Paul
Abstract
|
PDF
Title: Advances in computer bridge: techniques for a partial-information, communication-based game.
Candidate: Bethe, Paul
Advisor(s): Ernest Davis
Abstract:
Bridge is an imperfect information game with elements of competition
against opponents as well as cooperation with a partner. Despite the
application of many tenets of artificial intelligence, humans have yet
to be consistently bested by the computer. This thesis explores AI
shortcomings in both the play and bidding phases of the game. In the
play, we examine weaknesses in cutting-edge Monte Carlo techniques and
explore both inference- and learning-based solutions. In the bidding, we
go beyond existing rule-based systems and investigate deep reinforcement
learning as a method to learn how to bid.
-
Ph.D. Thesis
2021
Learning Causality in Molecular Biology
Cirrone, Jacopo
Abstract
|
PDF
Title: Learning Causality in Molecular Biology
Candidate: Cirrone, Jacopo
Advisor(s): Dennis Shasha
Abstract:
The Systems Biology community has invested a great deal of effort in
modeling gene regulatory networks that should be able to (i) accurately
predict future states and (ii) identify regulatory hubs that can be
manipulated to achieve desired phenotypes. Most computational tools for
the problem embody linear models (e.g. 5*TF1 + 2*TF2 - 0.4*TF3 + ...).
However, it is well known that biological interactions are highly
synergistic and non-linear. Further, those tools mostly try to directly
predict networks even when the discovered edges (which usually come from
some assay such as ChIP-seq) may have little physiological significance
(e.g., may not influence gene expression).
This thesis considers an alternative approach to inferring gene
causality. Specifically, we consider the problem of predicting the
expression of genes at a future time point in a genomic time series. In
this, we follow the philosophy that accurate prediction often
corresponds to a good understanding of causality.
The prediction may rest on several sources of data: the time point
immediately preceding t, the entire target time series preceding t, and
ancillary data. In biology, for example, the ancillary data may consist
of a network based on binding data, data from different time series,
steady state data, a community-blessed gold standard network, or some
combination of those. We introduce OutPredict, which is a machine
learning method for time series that incorporates ancillary steady state
and network data to achieve a low error in gene expression prediction.
We show that OutPredict outperforms several of the best state-of-the-art
methods for prediction. The predictive models OutPredict builds in turn
yield a causal network.
Thus, this thesis presents an approach to the inference of causality
based on predictions of out-of-sample time-points based on both steady
state and time series data. Because the model for each gene identifies
those transcription factors that have the most importance in prediction,
those important transcription factors are the most likely causal
elements for that gene. We validate those predictions for a set of
well-documented transcription factors in Arabidopsis.
Because our methods apply to any situation in which there is time series
data, ancillary data, and the need for non-linear causal models, we
believe that this work will have a broad appeal to the scientific
community, specifically those studying causality networks in any
biological system.
-
Ph.D. Thesis
2021
Responsibility Analysis by Abstract Interpretation
Deng, Chaoqiang
Abstract
|
PDF
Title: Responsibility Analysis by Abstract Interpretation
Candidate: Deng, Chaoqiang
Advisor(s): Patrick Cousot
Abstract:
Given a behavior of interest, automatically determining the corresponding responsible entity (that is, the root cause) is a task of critical importance in various scientific fields, especially in program static analysis. Classical static analysis techniques (e.g. dependency analysis, taint analysis, slicing) assist programmers in narrowing down the scope of responsibility, but none of them can explicitly identify the responsible entity. Meanwhile, causality analysis is generally not well suited to analyzing programs, and the structural equations model (SEM) of actual causality misses some information inherent in programs (e.g. temporal information, and whether an entity is free to make choices), making the corresponding program analysis imprecise.
In this dissertation, inspired by a classic forest fire example used in defining causality, a novel definition of responsibility based on the abstraction of trace semantics is proposed, which is expressive and generic enough to cope with both program analyses and tasks in other scientific fields. Briefly speaking, an action aR is responsible for behavior B in a certain trace if and only if aR is free to make choices, and such a choice is the first one that ensures the occurrence of B in that trace. Such a definition makes use of information regarding the temporal ordering of actions, as well as whether an action has free choices or not. In addition, our definition of responsibility takes into account the cognizance of the observer, which, to the best of our knowledge, is a novel idea in program analysis. Compared to current dependency and causality analysis methods, the responsibility analysis is demonstrated to be more precise in many examples.
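As a toy rendering of this definition (the trace encoding and helper names below are our own assumptions, not the dissertation's formalism):

```python
def responsible_action(trace, guarantees_b):
    """Toy sketch: the responsible action is the first free-choice
    action in the trace whose execution ensures the behavior B.
    trace: list of (action, is_free_choice) pairs in temporal order.
    guarantees_b(i): True iff B is ensured once trace[:i+1] has run."""
    for i, (action, is_free) in enumerate(trace):
        if is_free and guarantees_b(i):
            return action
    return None  # no single free choice ensured B along this trace
```

In a forest-fire-style example, freely dropping a lit match would be identified as responsible if it is the first free choice after which the fire is certain to occur.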
Furthermore, this dissertation proposes a sound framework of abstract responsibility analysis, which allows a balance between cost and precision to cope with the undecidable problem of responsibility. Essentially, the abstract analysis builds a trace partitioning automaton by an iteration of over-approximating forward reachability analysis with trace partitioning and under-approximating/over-approximating backward impossible failure accessibility analysis, and determines the bounds of potentially responsible entities along paths in the automaton. Unlike the concrete responsibility analysis, which identifies exactly one action as the responsible entity along every concrete trace, the abstract analysis may lose some precision and find multiple actions potentially responsible along each automaton path. However, soundness is preserved, and every responsible entity in the concrete is guaranteed to be also found responsible in the abstract.
-
TR2021-996
2021
Quantum Information Physics: 1
Geiger, Davi; Zvi M. Kedem
Abstract
|
PDF
-
TR2021-997
2021
Quantum Information Physics: 2
Geiger, Davi; Zvi M. Kedem
Abstract
|
PDF
-
Ph.D. Thesis
2021
Enhancing Collaboration and Productivity for Virtual and Augmented Reality
He, Zhenyi
Abstract
|
PDF
Title: Enhancing Collaboration and Productivity for Virtual and Augmented Reality
Candidate: He, Zhenyi
Advisor(s): Ken Perlin
Abstract:
Immersive environments such as Virtual Reality (VR) and Augmented Reality (AR) are now receiving more and more attention. Although VR and AR have largely been used for individual entertainment experiences, they also possess huge potential as a platform for the support of collaboration and productivity. My thesis work is concerned with enabling VR/AR to be flexibly adapted for collaborative and productive uses. I approach this scope from several facets: a new haptic user interface based on actuated robots to bridge virtual and physical world, a reconfigurable framework for both co-located and geographically dispersed multi-user communication, and a text entry system in which users type by tapping their fingers, without needing to look at their hands or be aware of their hand positions. Further, I extend these ideas to a daily video conferencing experience that requires minimal hardware.
-
TR2021-998
2021
DietVision: An App for Image-based Food Identification, Volume, and Nutrition Estimation
Hofmann, Michael;
Leopold Maillard; Jessica Ramaux; Dennis Shasha
Abstract
|
PDF
Title: DietVision: An App for Image-based Food Identification, Volume, and Nutrition Estimation
Author(s): Hofmann, Michael; Leopold Maillard; Jessica Ramaux; Dennis Shasha
Abstract:
DietVision is a mobile app that provides an estimate of the nutritional content of a meal
from images.
The software provides the following functions: (i) food detection, which classifies
each detected item and assigns it to a major food group; (ii) volume estimation
using two images of a plate taken at different angles, with a coin as a fiducial
marker; and (iii) user feedback to correct errors in steps (i) and (ii).
-
Ph.D. Thesis
2021
Larger-Context Neural Machine Translation
Jean, Sébastien
Abstract
|
PDF
Title: Larger-Context Neural Machine Translation
Candidate: Jean, Sébastien
Advisor(s): Kyunghyun Cho
Abstract:
Translation helps connect people by bridging language barriers. It can make travel more enjoyable, allow our minds to explore imaginary worlds, let us talk to others, and so on. Given the need for translation, but the limited availability of human translators, machine translation has flourished. Most systems translate sentences one by one, ignoring their context, which isn't always sufficient, as the missing information can lead to incorrect or inconsistent translations. We believe that neural machine translation (NMT) is particularly well-suited to incorporate the surrounding context. Indeed, NMT systems can attend to arbitrarily distant words, while the use of continuous representations improves generalization on unseen examples.
As such, in this thesis, we extend neural machine translation to leverage information from the surrounding context. To do so, we first highlight the potential of the then-nascent NMT paradigm. We subsequently introduce architectural changes to integrate information from the surrounding document, initially starting from the preceding sentence. We further encourage models to use context from either a learning or data augmentation perspective. We also consider the efficient use of document-level neural language models for this task. While some challenges still remain, our work has helped establish larger-context translation on a solid footing, and we are optimistic about future progress.
-
Ph.D. Thesis
2021
Improving Sample Efficiency of Imitation and Reinforcement Learning
Kostrikov, Ilya
Abstract
|
PDF
Title: Improving Sample Efficiency of Imitation and Reinforcement Learning
Candidate: Kostrikov, Ilya
Advisor(s): Rob Fergus
Abstract:
Reinforcement Learning (RL) is an area of machine learning focused on learning to make a sequence of actions in an environment that maximizes cumulative rewards. Combined with Deep Learning, Reinforcement Learning has made significant progress over the last decade across various domains. Notable successes include achieving superhuman performance on Atari games, Go, StarCraft II, Dota 2, and various continuous control tasks.
However, RL's success stories are often limited to games and simulations where it is possible to generate a large amount of training data. This thesis describes several methods focused on improving sample efficiency to enable a wider variety of RL applications. For the first half of the thesis, we focus on Imitation Learning, where ground truth rewards are usually unknown, and expert demonstrations define optimality. First, we introduce a method for robust and sample efficient imitation learning. We adapt an imitation learning approach where an agent tries to mimic a domain expert using a GAN-like framework called GAIL. We identify two primary sources of sample inefficiency associated with this approach: on-policy RL and GAN discriminator training. We show that sample inefficiency can be mitigated by performing off-policy RL training combined with off-policy training of the discriminator. We also identify and resolve some task-specific biases associated with the family of adversarial imitation learning algorithms based on GAIL. Then, we derive a principled off-policy formulation of robust imitation learning that is entirely offline and allows us to learn a policy that imitates the expert relying only on the previously collected data. This work concludes the part of the thesis focused on imitation learning, and for the rest of the thesis, we focus on online and offline RL where we have access to environment rewards. We observe that off-policy RL from pixels suffers from overfitting and propose a simple solution inspired by image augmentation techniques from Computer Vision. Finally, we introduce a method for offline RL that utilizes a pre-trained behavioral policy to improve the robustness of behavior regularization widely used in the context of offline RL. In contrast to prior work on Offline RL, this method utilizes the behavior policy to regularize the critic instead of constraining the training policy.
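The image-augmentation idea for pixel-based RL mentioned above can be sketched as a random shift of each observation. The function below is a hedged illustration (the padding size and edge-replication padding are assumptions, not the thesis's exact recipe):

```python
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """Augment an image observation with a random shift of up to `pad`
    pixels in each direction, padding borders by edge replication.
    obs: array of shape (H, W, C)."""
    rng = rng or np.random.default_rng()
    h, w, _ = obs.shape
    # Pad, then crop a randomly offset window of the original size.
    padded = np.pad(obs, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = int(rng.integers(0, 2 * pad + 1))
    left = int(rng.integers(0, 2 * pad + 1))
    return padded[top:top + h, left:left + w]
```

Applying such a shift independently to each sampled observation acts as a regularizer for the value function, mitigating overfitting to pixel-level details.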
-
Ph.D. Thesis
2021
Latent Variable Models and Iterative Refinement for Non-Autoregressive Neural Machine Translation
Lee, Jason
Abstract
|
PDF
Title: Latent Variable Models and Iterative Refinement for Non-Autoregressive Neural Machine Translation
Candidate: Lee, Jason
Advisor(s): Kyunghyun Cho
Abstract:
Deep neural networks have fundamentally transformed the field of machine translation, and replaced statistical phrase-based approaches to serve translations to millions of users in production systems every day. Despite impressive progress in translation accuracy, improving decoding speed remains a key challenge as most systems are \emph{autoregressive} and generate a sentence word-by-word. As neural machine translation (NMT) models are becoming increasingly deep and complex, there is a growing need for more efficient translation systems with sub-linear or constant inference latency, with respect to the sentence length. The main challenge in non-autoregressive machine translation is capturing the dependencies between tokens in a target sentence without autoregression. Motivated by a rich history of probabilistic graphical models in sequence generation, this thesis proposes to use latent variables to model intra-sentence dependencies, such that the output distribution can be factorized given the latent variables. We also present several inference algorithms for non-autoregressive machine translation based on iterative refinement, which revises a sentence over multiple iterations. Our non-autoregressive models based on latent variables and iterative refinement can deliver significant decoding speedup with comparable translation accuracy relative to a strong autoregressive baseline. Finally, we investigate the correlation between training (log-likelihood) and test objective (BLEU) of several model families. We observe that the two metrics are not correlated when comparing models from different families (e.g. between autoregressive and latent variable models).
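The refinement loop can be sketched as follows, with a stand-in `predict_fn` in place of a trained model; every name here is an illustrative assumption rather than the thesis's architecture:

```python
import numpy as np

def iterative_refinement(predict_fn, length, vocab_size, num_iters=4, rng=None):
    """Toy sketch of non-autoregressive decoding by iterative refinement:
    start from an initial draft and repeatedly re-predict every position
    in parallel, conditioning on the previous draft.
    predict_fn(draft) -> array of shape (length, vocab_size) of scores."""
    rng = rng or np.random.default_rng(0)
    draft = rng.integers(0, vocab_size, size=length)  # initial random draft
    for _ in range(num_iters):
        scores = predict_fn(draft)
        draft = scores.argmax(axis=-1)  # revise all positions in parallel
    return draft
```

Because each iteration updates all positions at once, the number of model calls is a small constant rather than growing with sentence length, which is the source of the decoding speedup.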
-
Ph.D. Thesis
2021
Neural Structured Prediction using Iterative Refinement with Applications to Text and Molecule Generation
Mansimov, Elman
Abstract
|
PDF
Title: Neural Structured Prediction using Iterative Refinement with Applications to Text and Molecule Generation
Candidate: Mansimov, Elman
Advisor(s): Kyunghyun Cho
Abstract:
Humans excel at generating structured data in the form of images, text, speech, molecules, computer code, and others. Researchers have spent several decades proposing various solutions for the effective generation of these structured objects in a data-driven way, known as structured prediction. With the revival of deep neural networks, autoregressive models that process structured objects in a fixed left-to-right monotonic ordering became the de facto solution for this problem. Notable successes of autoregressive models include neural machine translation [Sutskever et al., 2014, Bahdanau et al., 2014, Vaswani et al., 2017], open-ended text generation [Radford et al., 2019, Brown et al., 2020], and text-to-speech synthesis [van den Oord et al., 2016], among many others.
Despite the considerable success of autoregressive models on many applications, a natural question arises whether alternative approaches are possible for structured prediction. This thesis describes a novel method for structured prediction based on the principle of iterative refinement with a particular focus on applications to text and molecule generation. We first introduce the iterative refinement framework for text generation. Starting from the blank sentence, the iterative refinement approach gradually refines text over multiple steps. Using this approach, we show that we can flexibly generate the text in various ways, such as generate all or some words in parallel and generate text according to the ordering learned from the data. We show that iterative refinement achieves competitive performance compared to autoregressive models while delivering a speedup in decoding. We conclude this thesis by showing how we can adapt the iterative refinement framework originally introduced for text generation for molecule generation. In particular, we demonstrate two iterative refinement approaches for molecular graph generation and molecular geometry prediction. We anticipate that models based on the iterative refinement will be broadly applicable to other domains of interest.
-
Ph.D. Thesis
2021
Scalable Particulate Flow Simulations with Boundary Integral Equations
Morse, Matthew
Abstract
|
PDF
Title: Scalable Particulate Flow Simulations with Boundary Integral Equations
Candidate: Morse, Matthew
Advisor(s): Denis Zorin
Abstract:
Numerical simulation of complex particulate flows, and of red blood cell flows through capillaries in particular, is an important investigational tool in the biological sciences. The ability to rapidly evaluate the impact of vessel and cell geometries, plasma viscosity, and particulate densities on macroscopic physiology is crucial to pursuing further biological understanding. Experimental techniques are costly and time-consuming, while analytical approaches are often of limited practical use in realistic scenarios, ultimately underscoring the importance of a computational approach.
In this work, we construct such a simulation, capable of simulating microliters of blood flowing through realistic vasculature, along with more general particulate suspensions. Due to the micrometer length scales of typical capillaries, we can model the blood plasma as a Stokesian fluid and red blood cells as inextensible, deformable membranes. By reformulating the viscous flow as a set of boundary integral equations, we are able to produce a method that has optimal complexity with high-order accuracy that is capable of handling dense particulate suspensions in complex geometries.
This approach relies on a novel, robust solver for elliptic partial differential equations, applied to Stokes flow. A core component of the solver is a novel fast algorithm to compute the value of the solution near and on the domain boundary, which we have named \qbkix. We provide a set of algorithms to guarantee the accuracy of \qbkix on piecewise smooth surfaces, discuss the error behavior and complexity of \qbkix, and evaluate its performance.
Leveraging this solver in a confined blood flow simulation involves advecting deformable particulates along the flow trajectory. Large timesteps are required for an efficient simulation, but can cause collisions among cells and with the vessel wall if performed naively. We present collision detection and resolution algorithms for the red blood cells and the blood vessel. We parallelize \qbkix and the collision algorithms and scale the final simulation to nearly 35,000 cores.
-
Ph.D. Thesis
2021
Towards More General and Adaptive Deep Reinforcement Learning Agents
Raileanu, Roberta
Abstract
|
PDF
Title: Towards More General and Adaptive Deep Reinforcement Learning Agents
Candidate: Raileanu, Roberta
Advisor(s): Rob Fergus
Abstract:
Building agents with general skills that can be applied in a wide
range of settings has been a long-standing problem in machine
learning. The most popular framework for training agents to make
sequential decisions in order to maximize reward in a given
environment is Reinforcement Learning (RL). Over the last decade, deep
reinforcement learning, where RL agents are parameterized by neural
networks, has achieved impressive results on a number of tasks, from
games such as Atari, Go, StarCraft, or Dota, to continuous control
tasks with applications in robotics.
However, current RL agents are prone to overfitting and struggle to
generalize when even minor perturbations are applied to the training
environment. This hinders progress on real-world applications such as
autonomous vehicles or home robots, where agents need to deal with a
large variety of scenarios. In this thesis, we introduce several
methods for improving the versatility of deep reinforcement learning
agents. We start by studying the problem of zero-shot generalization
to new instances of a task after training on a limited number of
environments. We first propose an approach for regularizing the policy
and value function of a RL agent and automatically finding an
effective type of data augmentation for a given task. We also identify
that there is an asymmetry between the information needed to represent
the optimal policy and the true value function, which leads to
overfitting when using standard deep RL algorithms. As a step towards
solving this problem, we propose a method which decouples the
optimization of the policy and value, and constrains the
representation to be invariant to the task instance. Next, we focus on
the problem of learning general exploration strategies for
procedurally generated environments with sparse rewards. We formulate
a new type of intrinsic reward which encourages agents to impact their
environments and show that it outperforms other popular exploration
methods. Then, we discuss a novel approach for fast adaptation to new
dynamics. We show that our method, which leverages self-supervised
techniques to learn policy and environment embeddings, enables
adaptation within a single episode on a number of continuous control
tasks. Finally, we investigate how agents can learn more flexible
strategies for interacting with different opponents and collaborators.
-
Ph.D. Thesis
2021
Theory and Algorithms for Several Central Problems in Large-Scale Machine Learning
Storcheus, Dmitry
Abstract
|
PDF
Title: Theory and Algorithms for Several Central Problems in Large-Scale Machine Learning
Candidate: Storcheus, Dmitry
Advisor(s): Mehryar Mohri
Abstract:
This Ph.D. dissertation presents a fundamental analysis of several central problems in large-scale machine learning. We derive novel, scalable algorithms supported by strong theoretical guarantees for the most practically important large-scale learning scenarios. These scenarios include extensions of standard supervised learning to multiple base hypothesis spaces, multiple objective functions, multiple distributions, multiple classes, and high-dimensional feature spaces.
A standard supervised learning scenario consists of fitting a predictor from a fixed hypothesis space that minimizes a certain empirical loss on a sample drawn i.i.d. from a particular distribution. The richness of modern machine learning applications requires the learning scenario to be large-scale, with the ability to learn from many training examples. While scalability in terms of many examples is widely studied, the current state of research in the field overlooks other scenarios and directions for scalability that may be even more important than having many training examples: for instance, allowing the learner to select predictors from multiple hypothesis spaces of varying complexity, or to fit multiple objective functions.
While the problems mentioned above may seem to relate to separate aspects of large-scale learning, this thesis provides a unified theoretical analysis framework that brings these central problems together. This framework is based on the Rademacher complexity analysis as well as on the Empirical and Structural Risk Minimization principles.
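For reference, the empirical Rademacher complexity that underlies this style of analysis is standardly defined, for a sample S = (x_1, ..., x_m) and a hypothesis set H, as:

```latex
\widehat{\mathfrak{R}}_S(H) \;=\; \mathbb{E}_{\boldsymbol{\sigma}}\!\left[\, \sup_{h \in H} \frac{1}{m} \sum_{i=1}^{m} \sigma_i \, h(x_i) \right],
```

where the sigma_i are independent uniform random variables taking values in {-1, +1}; generalization bounds in this framework then control the gap between empirical and expected loss in terms of this quantity.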
-
Ph.D. Thesis
2021
The Evolutionary Maps of Data
Tamaskar, Abhinav
Abstract
|
PDF
Title: The Evolutionary Maps of Data
Candidate: Tamaskar, Abhinav
Advisor(s): Bud Mishra
Abstract:
We present a geometric view of analyzing temporal causal models from the perspective of topology and limit graphs. We briefly give an intuitive overview of the topological techniques used and of the theory of limit graphs, and then describe the Suppes-Bayes causal networks that serve as the temporal causal models. We describe evolutionary models used in the scientific literature and show an efficient model for performing simulations on generalized large-scale evolutionary networks. We then present techniques for analyzing large-scale evolutionary populations and showcase their generality through two real-world examples: (1) linguistic data from Reddit over the course of 5 years, where we show the existence of echo chambers and give a metric for analyzing the similarity of populations over time; and (2) the TCGA and COSMIC datasets of cancer mutations across over 11,000 genes, where we use an approximation metric on the space of causal models to find similar cancer types and perform transfer learning to boost survival forecasting with black-box learning models.
-
TR2021-999
2021
A Microservice Redesign of Search and Inference for the Linguistic Website Terraling
Vasandani, Shailesh;
Hannan Butt; Dennis Shasha
Abstract
|
PDF
Title: A Microservice Redesign of Search and Inference for the Linguistic Website Terraling
Author(s): Vasandani, Shailesh; Hannan Butt; Dennis Shasha
Abstract:
The linguistics web application Terraling serves many useful
functions for linguists. By extracting the critical path for linguistic
analysis into microservices, we are able to improve user experience,
optimize performance, and increase maintainability.
By using a modern stack with a React frontend and a Golang backend,
performance was improved by a factor of 700. In addition, new features
can be added with high velocity. The site can be accessed from any
device on the Terraling website.
-
Ph.D. Thesis
2021
Order and Learning in Sequential Neural Structured Prediction
Welleck, Sean
Abstract
|
PDF
Title: Order and Learning in Sequential Neural Structured Prediction
Candidate: Welleck, Sean
Advisor(s): Kyunghyun Cho
Abstract:
Structured objects such as sets, trees, and sequences appear in a variety of scientific and industrial domains. Developing machine learning methods that generate these objects is of interest for both scientific understanding and practical applications. One approach, sequential neural structured prediction, decomposes generation into a sequence of predictions, with each prediction made by a deep neural network. Choosing an appropriate sequential representation of each structured object and selecting an effective learning objective are key to adopting this approach. The standard method for learning specifies a canonical ordering of elements in the sequential representation and maximizes the likelihood of the resulting sequences. We develop two streams of research that explore alternatives to this fixed-order, maximum likelihood approach for sequentially generating sets, trees, and sequences, with a focus on natural language processing applications.
First, we focus on text generation and study degenerate properties of fixed-order maximum-likelihood learning, motivating new learning methods. We characterize the degeneracy using three properties observed in generated text: non-termination, logical incoherence, and repetition. To study non-termination, we develop theory that allows us to prove that conventional text generation methods can generate infinite-length sequences with high probability. To study logical incoherence, we create a dataset for investigating the degree to which a model logically contradicts its preceding statements. For reducing degeneration, we develop unlikelihood training, a learning method which penalizes task-specific textual properties. In the second part of the thesis, we remove the requirement of a fixed generation order with a learning framework called non-monotonic generation, which yields models that select input-dependent generation orders. We use non-monotonic generation to generate multisets, parse trees, and text. The investigations and techniques presented in this thesis lead to promising directions for future work. -
Ph.D. Thesis
2021
Techniques for Sample-Efficient Reinforcement Learning
Whitney, William
Abstract
|
PDF
Title: Techniques for Sample-Efficient Reinforcement Learning
Candidate: Whitney, William
Advisor(s): Kyunghyun Cho
Abstract:
By leveraging advances in deep learning, reinforcement learning (RL) has recently made such strides that, for any task which has a simulator, and thus enables the collection of nearly unlimited data, it might now be expected to yield superhuman performance. However, many practically relevant tasks take place in the physical world. Constructing physical simulators of sufficient fidelity and correspondence for transfer is a non-trivial challenge, so for the majority of physical tasks at least some amount of training on real data is required. Collecting data in the real world is expensive enough that it makes up much of the cost of training a reinforcement learning agent.
This thesis focuses on improving the sample efficiency of reinforcement learning in order to make it more practical to use on physical systems. It includes three approaches to this goal. The first part studies the data collection process, and in particular the opportunity for exploration to improve the sample efficiency of RL. The second part considers the use of representation learning to improve generalization, and thus sample efficiency, in reinforcement learning. The third part examines the offline RL setting, which consists of pure policy optimization using a fixed dataset and therefore does not require additional data collection.
Taken together, this work studies techniques for improving the sample efficiency of reinforcement learning by collecting data which is more useful and diverse, then learning more from every sample.
It represents an early step on the path to RL as an everyday tool for control of physical systems. -
Ph.D. Thesis
2021
Methods to Improve Knowledge Transfer Efficiency for Data-limited Problems in Genomics
Yi, Ren
Abstract
|
PDF
Title: Methods to Improve Knowledge Transfer Efficiency for Data-limited Problems in Genomics
Candidate: Yi, Ren
Advisor(s): Richard Bonneau
Abstract:
The recent advancement in computational genomics has greatly benefited from the explosion of high-throughput genomic data and similar growth in biological databases. However, as more sequencing technologies become available and large genomic consortia start to crowdsource data from larger cohorts of research groups, data heterogeneity has become an increasingly prominent issue. Data integration across multiple data sources and data modalities becomes particularly important for a greater number of biological systems. High-throughput omics data are typically highly skewed towards a small number of model organisms, factors, and conditions with which wet-lab experiments have higher success rates. This skew further introduces technical challenges when building machine learning models for problems with limited data. This thesis describes methods that improve knowledge transfer efficiency for learning data-limited problems through effective task-specific feature representation in the multitask learning setting. We demonstrate the performance of our methods in two genomic problems -- genetic variant calling and cell type-specific transcription factor binding predictions.
-
Ph.D. Thesis
2020
Out of Distribution Generalization in Machine Learning
Arjovsky, Martin
Abstract
|
PDF
Title: Out of Distribution Generalization in Machine Learning
Candidate: Arjovsky, Martin
Advisor(s): Leon Bottou
Abstract:
Machine learning has achieved tremendous success in a variety of
domains in recent years. However, a lot of these success stories have
been in places where the training and the testing distributions are
extremely similar to each other. In everyday situations, when models
are tested on data slightly different from what they were trained on, ML
algorithms can fail spectacularly. This research attempts to formally
define this problem, identify which assumptions are reasonable to make
about our data, and determine what guarantees we can hope to obtain from them.
Then, we focus on a certain class of out-of-distribution problems and
their assumptions, and introduce simple algorithms that follow from
these assumptions and are able to provide more reliable
generalization. A central topic in the thesis is the strong link
between discovering the causal structure of the data, finding features
that are reliable predictors regardless of their
context, and out-of-distribution generalization. -
Ph.D. Thesis
2020
Behavior of the Limited-Memory BFGS Method on Nonsmooth Optimization Problems
Asl, Azam
Abstract
|
PDF
Title: Behavior of the Limited-Memory BFGS Method on Nonsmooth Optimization Problems
Candidate: Asl, Azam
Advisor(s): Michael Overton
Abstract:
The limited memory BFGS (Broyden-Fletcher-Goldfarb-Shanno) method,
abbreviated L-BFGS, is widely used for large-scale unconstrained optimization, but its behavior on nonsmooth problems has received little attention. In this thesis we give the first convergence analysis of the L-BFGS method applied to nonsmooth functions. We focus on the simplest version of the method, sometimes known as memoryless BFGS, which uses just one update. L-BFGS can be used with or without “scaling”; the use of scaling is normally recommended. We consider a simple class of convex piecewise linear nonsmooth functions f that are unbounded below. On this class of problems, we show that memoryless BFGS with scaling, using any Armijo-Wolfe line search and initialized at any point where f is differentiable, generates iterates that converge to a non-optimal point, if a certain condition
relating the Lipschitz constant of f to the line search Armijo parameter holds. We also present an analysis of the ordinary gradient method with the same line search applied to the same class of functions, giving conditions under which it also fails. However, scaled memoryless BFGS fails under a weaker condition relating the Lipschitz constant of the function to the line search Armijo parameter than that implying failure of the gradient method. Furthermore, in sharp contrast to the gradient method, if a specific standard Armijo-Wolfe bracketing line search is used, scaled memoryless BFGS fails if the Lipschitz constant is sufficiently large regardless of the Armijo
parameter. Our experimental results demonstrate that our analysis is tight on this class of functions, and that similar results likely hold for L-BFGS with any fixed number of updates. In contrast, the “full” BFGS method is remarkably effective for minimizing nonsmooth functions, but it is not a practical approach when the number of variables is large.
We also conduct extensive experiments applying L-BFGS, both scaled and unscaled, with various choices for the number of updates, to many other classes of convex nonsmooth functions, ranging from artificially devised, highly ill-conditioned nonsmooth problems to eigenvalue optimization problems that are equivalent to semidefinite programming problems arising from applications. We also apply L-BFGS to smoothed versions of these problems. We find that although L-BFGS is usually a reliable method for minimizing ill-conditioned smooth problems, when the condition number is so large that the function is effectively nonsmooth, L-BFGS consistently fails. This behavior is in sharp contrast to the behavior of full BFGS, which is consistently reliable for nonsmooth optimization problems. We conclude that, for large-scale nonsmooth optimization problems for which BFGS and other methods are not practical, it is far preferable to apply L-BFGS to a smoothed variant of a nonsmooth problem than to apply it directly to the nonsmooth problem. -
M.S. Thesis
2020
Cooperation and Deception in multi-agent signaling
Enaganti, Inavamsi
Abstract
|
PDF
Title: Cooperation and Deception in multi-agent signaling
Candidate: Enaganti, Inavamsi
Advisor(s): Bhubaneswar Mishra
Abstract:
We aim to study cooperation and deception in a system with multiple agents through utility and signaling. We start with the classic standard for cooperation, namely the ‘Prisoner’s Dilemma’, and then move on to the ‘Iterated Prisoner’s Dilemma’, which we treat as an iterated signaling game: an agent’s previous actions are a signal to the opponent about the agent’s type. We then move on to bio-mimicry and deception, where we study the dynamics and interesting phenomena that arise from signaling between predator and prey. Cooperation and deception are two sides of the same coin, and it is imperative to understand both as we develop better and more efficient artificial intelligence systems.
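To make the signaling view concrete, here is a minimal sketch of an iterated Prisoner's Dilemma in which each strategy observes only the opponent's move history (its "signal" about the opponent's type). The payoff values and strategy names are the textbook defaults, not taken from the thesis:

```python
# Payoffs (my_score, opp_score) for each (my_move, opp_move); C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    """Run an iterated game; each strategy sees only the opponent's past moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)          # decide based on the opponent's signal so far
        b = strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two textbook strategies.
tit_for_tat = lambda opp: 'C' if not opp else opp[-1]   # mirror the last signal
always_defect = lambda opp: 'D'
```

Against an unconditional defector, tit-for-tat is exploited only once before the history signal reveals the opponent's type; thereafter both defect.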
-
Ph.D. Thesis
2020
Enhanced Representations for Relations by Multi-task Learning
Fu, Lisheng
Abstract
|
PDF
Title: Enhanced Representations for Relations by Multi-task Learning
Candidate: Fu, Lisheng
Advisor(s): Grishman, Ralph
Abstract:
A relation describes the relationship between a pair of entities. Relation Extraction is the process of extracting relations from free text and converting them to structured machine-readable knowledge. This process can facilitate building and extending knowledge bases, and therefore can benefit a variety of natural language processing applications such as Question Answering and Summarization.
Typical relation extraction projects start by defining a relation schema: a set of mutually-exclusive relation types. Based on these definitions, all instances of these relations in a text corpus are labeled by hand, producing a dataset which can be used to train a statistical model. Labeling relations in text is difficult and time-consuming. There only exist limited relation datasets developed in this way. New applications will give rise to new schemas, so the lack of high-quality labeled data is almost inevitable for Relation Extraction.
Despite limited labeled samples in relation datasets, neural net models have been shown to be more effective than traditional methods in learning feature representations with pre-trained word embeddings. In the context of representation learning, this thesis presents multi-task learning frameworks to learn enhanced representations for relations. It shows how to learn better feature representations in both unsupervised and supervised ways. First, the dissertation shows how to learn domain invariant representations using unlabeled entity pairs. Then it shows how to learn a unified encoder by combining multiple annotated datasets. Finally, it shows how to learn the relatedness between relation types across different relation schemas. These techniques improve the relation models without requiring more annotation from the target dataset. The multi-task learning frameworks could be an efficient toolkit for relation extraction in general.
-
Ph.D. Thesis
2020
Scaling Multi-user Virtual and Augmented Reality
Herscher, Sebastian
Abstract
|
PDF
Title: Scaling Multi-user Virtual and Augmented Reality
Candidate: Herscher, Sebastian
Advisor(s): Perlin, Ken
Abstract:
The Virtual and Augmented Reality (XR) ecosystems have been gaining substantial momentum and traction within the gaming, entertainment, enterprise, and training markets in the past half-decade, but have been hampered by limitations in concurrent user count, throughput, and accessibility to mass audiences. Although a wide array of XR devices have been made available for public purchase, most XR experiences have been developed for either a single user or a small set of users at a time. Few systems or experiments in co-located XR environments have expanded past a small set of users, leaving the paradigm of being part of a larger virtual audience relatively untested. This thesis presents a set of components, systems, and experiments that assist in the creation, deployment, and scaling of multi-user virtual and augmented reality experiences, and outlines the strengths of techniques found in traditional co-located media for the design space of scaled co-located XR.
-
M.S. Thesis
2020
Static Responsibility Analysis of Floating-Point Programs
Saatcioglu, Goktug
Abstract
|
PDF
Title: Static Responsibility Analysis of Floating-Point Programs
Candidate: Saatcioglu, Goktug
Advisor(s): Thomas Wies
Abstract:
The last decade has seen considerable progress in the analysis of floating-point programs. There now exist frameworks to verify both the total amount of round-off error a program accrues and the robustness of floating-point programs. However, there is a lack of static analysis frameworks to identify causes of erroneous behaviors due to the use of floating-point arithmetic. Such errors are both sporadic and triggered by specific inputs or numbers computed by programs. In this work, we introduce a new static analysis by abstract interpretation to define and detect responsible entities for such behaviors in finite precision implementations. Our focus is on identifying causes of test discontinuity, where small differences in inputs may lead to large differences in the control flow of programs, causing the computed finite precision path to differ from the same ideal computation carried out in real numbers. However, the analysis is not limited to discontinuity, as any type of error cause can be identified by the framework. We propose to carry out the analysis by a combination of over-approximating forward partitioning semantics and under-approximating backward semantics of programs, which leads to a forward-backward static analysis with iterated intermediate reduction. This paves the way for a tool that helps programmers identify and fix numerical bugs in their programs due to the use of finite-precision numbers. The implementation of this tool is the next step for this work. -
TR2020-995
2020
FireRevit: Using Revit Files to Identify the Room Locations of Fires and Escape Routes
Sheng, Luhan;
Dennis Shasha
Abstract
|
PDF
Title: FireRevit: Using Revit Files to Identify the Room Locations of Fires and Escape Routes
Author(s): Sheng, Luhan; Dennis Shasha
Abstract:
A Revit file is a proprietary format used by Autodesk Revit to store a building model.
It contains all the information that describes a building model, such as element and
entity data, project location, etc. Since 2010, to enable advanced users and
third-party developers to integrate their applications into the Autodesk Revit family of
products, Autodesk has permitted developers to use the API provided by Revit to obtain
building data. In fact, one can now process large quantities of Revit files
and extract building information automatically. Based on this, FireRevit consists of a parser for all the building model files in a given
city that finds the location of every window in every building and its corresponding room, and
creates a database to persist the data. In this way, when a fire breaks out in the city and a drone sighting of the fire gives
latitude, longitude, and height, FireRevit can help firefighters determine the building and
room where the fire occurred by retrieving records from the database. FireRevit also combines the Revit file with information about which rooms are inaccessible
due to fire to guide residents to the nearest exit. -
M.S. Thesis
2020
Pointer-Generator Transformers for Morphological Inflection
Singer, Assaf
Abstract
|
PDF
Title: Pointer-Generator Transformers for Morphological Inflection
Candidate: Singer, Assaf
Advisor(s): Kyunghyun Cho
Abstract:
In morphologically rich languages, a word's surface form reflects syntactic and semantic properties such as gender, tense or number. For example, most English nouns have both singular and plural forms (e.g., robot/robots, process/processes), which are known as the inflected forms of the noun. The vocabularies of morphologically rich languages, e.g., German or Spanish, are larger than those of morphologically poor languages, e.g., Chinese, if every surface form is considered an independent token. This motivates the development of models that can deal with inflections by either analyzing or generating them and, thus, alleviate the sparsity problem.
This thesis presents approaches to generate morphological inflections. We cast morphological inflection as a sequence-to-sequence problem and apply different versions of the transformer, a state-of-the-art deep learning model, to the task. However, for many languages, the availability of morphological lexicons, and, thus, of training data for the task, is a big challenge. In our work, we explore different ways to overcome this: 1. We propose a pointer-generator transformer model to allow easy copying of input characters, which is known to improve the performance of neural models in the low-resource setting. 2. We implement a system for the task of unsupervised morphological paradigm completion, where systems produce inflections from raw text alone, without relying on morphological information. 3. We explore multitask training and data hallucination pretraining, two methods which yield more training examples.
With our formulated models and data augmentation methods, we participate in the SIGMORPHON 2020 shared task, and describe the NYU-CUBoulder systems for Task 0 on typologically diverse morphological inflection and Task 2 on unsupervised morphological paradigm completion. Finally, we design a low-resource experiment to show the effectiveness of our proposed approaches for low-resource languages.
-
M.S. Thesis
2020
Data Flow Refinement Type Inference Tool Drift²
Su, Yusen
Abstract
|
PDF
Title: Data Flow Refinement Type Inference Tool Drift²
Candidate: Su, Yusen
Advisor(s): Thomas Wies
Abstract:
Refinement types utilize logical predicates to capture run-time properties of programs, which can be used for program verification. Traditionally, SMT-based checking tools for refinement types, such as the implementation of Liquid Types [1], require either heuristics or random sampling of logical qualifiers to find the relevant logical predicates.
In this thesis, we describe the implementation of a novel algorithm proposed in Zvonimir Pavlinovic’s PhD thesis "Leveraging Program Analysis for Type Inference" [2], based on the framework of abstract interpretation, for inferring refinement types in functional programs. The analysis generalizes Liquid type inference and is parametric in the abstract domain used to express type refinements. The main contribution of this thesis is to instantiate this parametric type analysis and to evaluate the algorithm’s precision and efficiency. Moreover, we describe a tool, called DRIFT², which allows users to select an abstract domain for expressing type refinements and to control the degree to which context-sensitive information is tracked by the analysis.
Finally, our work compares the precision and efficiency of DRIFT² for different configurations of numerical abstract domains and widening operations [3]. In addition, we compare DRIFT² with existing refinement type inference tools. The experimental results show that our method is both effective and efficient in automatically inferring refinement types. -
Ph.D. Thesis
2020
Auditing Outsourced Services
Tan, Cheng
Abstract
|
PDF
Title: Auditing Outsourced Services
Candidate: Tan, Cheng
Advisor(s): Michael Walfish
Abstract:
Outsourcing to the cloud rests on the assumption that remote servers behave as expected, even under failures, bugs, misconfigurations, operational mistakes, insider threats, and external attacks. Can we instead verify their behavior? There have been various attempts at such verification, but these attempts have had to choose between comprehensive guarantees and good performance. This dissertation studies how to get both.
This dissertation focuses on two essential services: outsourced computation and outsourced databases. Verifying them correspondingly introduces two new abstract problems. We call the first problem the Efficient Server Audit Problem, which examines how to efficiently verify a concurrent and untrusted server. The second problem is verifying a core correctness contract of black-box databases while scaling to real-world online workloads.
To address the two problems, this dissertation respectively introduces two systems: orochi and cobra. Both systems tolerate arbitrary failures in the service provider, and have good performance: in our experiments, orochi’s verifier achieves 5.6–10.9x speedup versus simply re-executing inputs, with less than 10% CPU overhead on the server side; cobra improves over baselines by 10x in verification cost, with modest overhead on clients (less than 5% throughput degradation and about 7% 90-percentile latency increases). -
Ph.D. Thesis
2020
Market Efficiency and Dynamics
Tao, Yixin
Abstract
|
PDF
Title: Market Efficiency and Dynamics
Candidate: Tao, Yixin
Advisor(s): Richard Cole
Abstract:
General equilibrium theory, initiated by Walras over a century ago, explains the interaction between supply and demand in an economy. In this dissertation, we look at Fisher Markets, which are a particular case of the general equilibrium theory. We consider two issues in Fisher Markets: strategic behavior and dynamics.
Strategic behavior is usually considered in a game, such as an auction, in which case participants may choose not to report their real preferences in order to improve their payoff. In general equilibrium theory, buyers are usually considered to be non-strategic: given the prices, buyers maximize their true utilities by appropriately distributing their money across different goods. In this case, the market equilibrium should be efficient. However, the prices in the market equilibrium are influenced by the demands of the buyers. In principle, buyers can affect prices by changing their demands, which may improve the buyers' final utilities. This may result in inefficient outcomes. In this thesis, we investigate this possibility in large Fisher markets. We show that the market approaches full efficiency as the market becomes larger and larger. We also show a similar result for the Walrasian mechanism in large settings.
We also study two dynamics in Fisher markets in this dissertation:
- Proportional response is a buyer-oriented dynamic. Each round, buyers update their spending in proportion to the utilities they received in the last round, where a good's price is the total spending on it. This dissertation establishes new convergence results for two generalizations of proportional response in Fisher markets with buyers having CES utility functions. The starting points are respectively a new convex and a new convex-concave formulation of such markets. The two generalizations of proportional response correspond to suitable mirror descent algorithms applied to these formulations. Among other results, we analyze a damped generalized proportional response and show a linear rate of convergence in a Fisher market with buyers whose utility functions cover the full spectrum of CES utilities aside from the extremes of linear and Leontief utilities; when these utilities are included, we obtain an empirical O(1/T) rate of convergence.
- Tatonnement is considered the most natural dynamic in Fisher markets: the price of a good is raised if demand exceeds its supply, and decreased if demand falls short of it. Implicitly, buyers' demands are assumed to be a best response to the current prices. This dissertation addresses a lack of robustness in existing convergence results for discrete forms of tatonnement, including the fact that it need not converge when buyers have linear utility functions. This dissertation shows that for Fisher markets with buyers having CES utility functions, including linear utility functions, tatonnement will converge quickly to an approximate equilibrium (i.e., at a linear rate), modulo a suitable large market assumption. The quality of the approximation is a function of the parameters of the large market assumption.
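As an illustration of the proportional response rule described above, here is a toy implementation for the special case of linear utilities. This is a sketch of the basic dynamic only; the generalized mirror-descent variants analyzed in the dissertation are more involved, and all names below are ours:

```python
import numpy as np

def proportional_response(u, budgets, rounds=200):
    """Proportional response dynamics in a linear Fisher market.

    u[i][j]  : buyer i's utility per unit of good j
    budgets  : buyer i's budget B_i
    Each round, buyer i splits B_i across goods in proportion to the
    utility u[i][j] * x[i][j] received in the previous round, where
    allocations are x[i][j] = b[i][j] / p[j] and a good's price p[j]
    is the total spending on it, sum_i b[i][j].
    """
    u = np.asarray(u, dtype=float)
    B = np.asarray(budgets, dtype=float)
    n, m = u.shape
    b = np.outer(B, np.ones(m) / m)       # start from uniform spending
    for _ in range(rounds):
        p = b.sum(axis=0)                 # prices: total spending per good
        x = b / p                         # allocations at current prices
        gains = u * x                     # utility each (buyer, good) pair yields
        b = B[:, None] * gains / gains.sum(axis=1, keepdims=True)
    return b, b.sum(axis=0)
```

In a symmetric 2x2 market where each buyer favors a different good, the spending matrix converges to each buyer spending their whole budget on their favored good, with equal equilibrium prices.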
-
Ph.D. Thesis
2020
Flexible and Efficient Systems for Training Emerging Deep Neural Networks
Wang, Minjie
Abstract
|
PDF
Title: Flexible and Efficient Systems for Training Emerging Deep Neural Networks
Candidate: Wang, Minjie
Advisor(s): Li, Jinyang
Abstract:
The success of deep neural networks (DNNs) is due to their strong capability to learn from data. Leveraging more data requires larger models that may exceed the capacity of a single computing device. Leveraging graph-structured data demands models with sparse computation patterns. Unfortunately, current deep learning systems limit the exploration of such models, resulting in a frustrating user experience. This thesis proposes a system design to guide the development of new deep learning systems. The goal of this design is to enable efficient training of these emerging DNNs with little user effort.
We then realize the design in two systems, Tofu and DGL. Tofu partitions very large DNNs across multiple GPUs to reduce per-GPU memory footprint. To automatically partition each operator, we propose a description language for annotating the semantics of an operator. To optimally partition the whole training, Tofu proposes an algorithm that minimizes the total communication cost. We evaluate Tofu on training very large models, demonstrating the substantial gains from applying the design. We then implement DGL, a new framework for training DNNs on graph-structured data. DGL provides an intuitive and expressive interface that can cover a wide range of graph DNN models. We introduce batching and kernel fusion techniques that enable training GNNs on large graphs and achieve significant improvements in performance relative to existing systems.
-
M.S. Thesis
2020
Are the proposed similarity metrics also a measure of functional similarity?
Yellapragada, Manikanta Srikar
Abstract
|
PDF
Title: Are the proposed similarity metrics also a measure of functional similarity?
Candidate: Yellapragada, Manikanta Srikar
Advisor(s): Kyunghyun Cho
Abstract:
A recent body of work attempts to understand the behavior and training dynamics of neural networks by analyzing intermediate representations and designing metrics to define the similarity between those representations. We observe that the representations of the last layer can be thought of as the functional output of the model up to that point. In this work, we investigate whether the similarity between these representations can be considered a stand-in for the similarity of the networks' output functions. This can have an impact on many downstream tasks, but we specifically analyze it in the context of transfer learning. Consequently, we perform a series of experiments to understand the relationship between the representational similarity and the functional similarity of neural networks. We show in two ways that the leading metric for representational similarity, CKA, does not bear a strict relationship with functional similarity.
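For context, the CKA metric referenced above has a simple linear form. The following sketch follows the Frobenius-norm formulation of Kornblith et al.; it is illustrative only and not code from the thesis:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representations of the same n examples.

    X : (n, d1) activations from one network/layer, rows = examples
    Y : (n, d2) activations from another network/layer
    Returns a similarity in [0, 1]; invariant to orthogonal transforms
    and isotropic scaling of either representation.
    """
    X = X - X.mean(axis=0)                      # center features
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, 'fro') ** 2   # alignment of the two spaces
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den
```

The invariances are the point: a representation rotated by any orthogonal matrix scores a CKA of 1 against the original, even though a downstream head consuming it would compute a different function, which is one intuition for why representational and functional similarity can diverge.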
-
Ph.D. Thesis
2019
From 2.5G To 5G: Enhancing Access And Performance For Mobile Users
Ahmad, Talal
Abstract
|
PDF
Title: From 2.5G To 5G: Enhancing Access And Performance For Mobile Users
Candidate: Ahmad, Talal
Advisor(s): Subramanian, Lakshminarayanan
Abstract:
This dissertation has two overarching themes: i) enhancing connectivity access for mobile users in rural contexts and ii) enhancing transport layer performance for mobile users.
More than half of the world’s population faces barriers in accessing the Internet. A recent ITU study estimates that 2.6 billion people cannot afford connectivity and that 3.8 billion do not have access to it. To enhance access I have worked on two projects: Wi-Fly and GreenApps. Wi-Fly is a new connectivity paradigm designed for regions without Internet coverage that enables communication between a lightweight Wi-Fi device on commercial planes and ground stations. Through empirical experiments with test flights and simulation, we show that Wi-Fly and its extensions have the potential to provide connectivity in the most remote regions of the world. In GreenApps, we look at how localized cellular applications can be built for rural communities on top of software-defined cellular base stations. We deployed the GreenApps platform on rural base stations for communities in Ghana and Nicaragua and supported multiple localized applications for rural communities.
Enhancing transport layer performance over cellular networks is critical to improve end-to-end application performance for mobile users. Cellular networks have unique challenges that make conventional transport protocols not suitable for these environments. In the past few years, several new delay-based congestion-control algorithms have been developed with complex nonlinear control loops for cellular contexts. While these protocols have shown promise, it has been extremely challenging to analyze and interpret the behavior of these algorithms especially under highly variable network conditions (e.g., cellular links). In the Model-Driven Interpretable (MDI) congestion control work, we provide a model-driven framework to reason about the behavior of such congestion control algorithms. Our modeling approach simplifies a congestion control algorithm’s behavior into a guided random walk over a two-dimensional Markov model. We show that the model of a congestion-control algorithm can give key insights into its convergence and performance. More recently, we also looked at how to learn early signals of congestion in highly varying 5G channels. In particular we worked with Wi-Gig traces collected at 60 GHz and showed that it is possible to learn highly accurate early congestion signals using delay features observed at end-hosts.
-
M.S. Thesis
2019
End-to-End Hierarchical Clustering with Graph Neural Networks
Choma, Nicholas
Abstract
|
PDF
Title: End-to-End Hierarchical Clustering with Graph Neural Networks
Candidate: Choma, Nicholas
Advisor(s): Bruna, Joan
Abstract:
The objective of this thesis is to develop a data-driven, hierarchical clustering method which is capable of operating on large point cloud datasets, necessitating a runtime which is sub-quadratic. Hierarchical clustering is noteworthy for its ability to produce multiscale views of data, allowing for rich and interpretable representations, and for its ability to cluster when the number of clusters is not specified a priori. To date, deep learning methods for clustering have primarily focused on a narrower class of models which cluster using partitioning strategies and require as input the number of clusters to produce. In this work, we introduce the clustering graph neural network, extending previous research into graph neural networks to handle large clustering tasks where the number of clusters is variable and not pre-specified. Our architecture is fast, operating with O(n log n) time complexity, and we note its amenability to high levels of parallelization. Because each stage is differentiable, we emphasize that our architecture is capable of end-to-end training, leveraging signal throughout the learning pipeline as part of a multi-objective loss function. Finally, we demonstrate the clustering graph neural network on a challenging particle tracking task, which, while unable to outperform highly-tuned and domain-specific baselines, nevertheless achieves high performance while remaining flexible to a wide array of clustering tasks.
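For contrast with the learned sub-quadratic approach described above, a deliberately naive single-linkage agglomerative baseline illustrates why hierarchical clustering needs no pre-specified cluster count: the merge sequence encodes every granularity at once. This sketch is ours, not the thesis's method:

```python
import numpy as np

def single_linkage(points):
    """Naive agglomerative single-linkage clustering (roughly cubic time).

    Returns the merge sequence (cluster_a, cluster_b, distance), i.e. a
    dendrogram in list form. Cutting the sequence at any distance
    threshold yields a flat clustering, so no cluster count is fixed up
    front; the clustering GNN in the thesis replaces this brute-force
    procedure with a learned O(n log n) one.
    """
    points = np.asarray(points, dtype=float)
    clusters = {i: [i] for i in range(len(points))}
    merges = []
    while len(clusters) > 1:
        # Find the pair of clusters with the smallest point-to-point gap.
        best, pair = np.inf, None
        keys = list(clusters)
        for ai, a in enumerate(keys):
            for b in keys[ai + 1:]:
                d = min(np.linalg.norm(points[p] - points[q])
                        for p in clusters[a] for q in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        merges.append((a, b, best))
        clusters[a] += clusters.pop(b)   # absorb b into a
    return merges
```

On two well-separated pairs of points, the first merges happen at the small within-pair distance and the final merge at the large between-pair gap, giving the multiscale view in a single pass.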
-
Ph.D. Thesis
2019
Co-Located Augmented and Virtual Reality Systems
DeFanti, Connor
Abstract
|
PDF
Title: Co-Located Augmented and Virtual Reality Systems
Candidate: DeFanti, Connor
Advisor(s): Perlin, Ken
Abstract:
Augmented and Virtual Reality (AVR) systems have become increasingly popular in the worlds of entertainment and industry. However, many current systems are limited in scope to experiences that isolate a single user within a given physical space. While many such experiences allow for interactions between remotely located users, very few experiences allow for multiple users to coexist in the same physical space while interacting with a consistent world-view of shared virtual objects. Our research has found that by enabling this co-located paradigm, users are able to have rich interactions that are otherwise impossible. This thesis presents a series of experiments that demonstrate the importance of the social aspects of co-located AVR, a set of solutions that overcome the difficulties often encountered in such experiences, and directions for future scalability using forthcoming hardware and technologies.
-
TR2019-993
2019
Vertex-Based Preconditioners for the Coarse Problems of BDDC
Dohrmann, Clark R.;
Pierson, Kendall H.; Widlund, Olof B.
Abstract
|
PDF
Title: Vertex-Based Preconditioners for the Coarse Problems of BDDC
Author(s): Dohrmann, Clark R.; Pierson, Kendall H.; Widlund, Olof B.
Abstract:
We present a family of approximate BDDC preconditioners based on inexact solvers for the coarse problem. The basic idea is to replace the direct solver for a standard BDDC coarse problem by a preconditioner which requires much less computation and memory. The focus in this study is on scalar elliptic and linear elasticity problems in three dimensions. The preconditioner for the coarse problem employs a standard two-level additive Schwarz approach in which the coarse problem dimension is either one or six times the number of subdomain vertices. We show, under certain assumptions on the coefficients, that favorable BDDC condition number estimates also hold for the approximate preconditioners. Numerical examples are presented to confirm the theory and to demonstrate the computational advantages of the approach.
-
Ph.D. Thesis
2019
Design for Customized Manufacturing
Gil-Ureta, Francisca T.
Abstract
|
PDF
Title: Design for Customized Manufacturing
Candidate: Gil-Ureta, Francisca T.
Advisor(s): Denis Zorin
Abstract:
Over the past few years, 3D printing technology has captivated business and consumers alike with its promise of affordable custom manufacturing. The expectation is, in the future, people will be able to easily customize and manufacture objects to fit individual needs. To make this a reality, we need new methods that support the creative process of makers, from conception to fabrication.
In this thesis, I present three projects where we reexamine the tools and workflows used for customized design. The core idea behind these projects is that, compared with traditional methods, we design for an unknown or changeable manufacturing process, which affects the life-cycles of design. Our goal is to create tools that simplify the modification, optimization, and evaluation of designs such that they can be easily altered to fit manufacturing and personal constraints.
Although fabrication constraints are unlimited, we can study specific domains to learn the most common ones. In the first project, we present an interactive modeling tool for designing mechanical objects, which are determined mostly by kinematic constraints. In the second project, we study the structural efficiency of shells and introduce an efficient method for designing shell reinforcements of minimal weight. Finally, in the third project, we develop a robust collision resolution algorithm, crucial for the design and optimization of models subject to dynamic impulses.
-
M.S. Thesis
2019
On Zero-Shot Transfer Learning for Event Extraction
Haroon, Shaheer
Abstract
|
PDF
Title: On Zero-Shot Transfer Learning for Event Extraction
Candidate: Haroon, Shaheer
Advisor(s): Grishman, Ralph
Abstract:
Event extraction normally requires large amounts of annotated data for each event type. Each event consists of trigger words and arguments that fulfill certain roles. Because manually annotating a corpus takes massive effort, this limits the ability to add new event types to an existing ontology or to build a new one. Recent methods have proposed using zero-shot transfer learning to minimize the amount of annotated data required for a classifier to predict new event types. The zero-shot classifier relies on several components, including a preexisting event ontology, to be successful. Our goal was to explore factors that could influence the results of a zero-shot classifier, including the choice of role names, event type names, and the definitions of event mention and event type structures. We found that the use of paradigmatic role names and characteristic event type names in an event ontology has an especially significant impact on the success of the classifier. As a result, adding new event types to an ontology still requires a substantial amount of effort to promote the success of a zero-shot approach.
-
Ph.D. Thesis
2019
Scalable Machine Learning using Dataflow Graph Analysis
Huang, Chien-Chin
Abstract
|
PDF
Title: Scalable Machine Learning using Dataflow Graph Analysis
Candidate: Huang, Chien-Chin
Advisor(s): Li, Jinyang
Abstract:
In the past decade, the abundance of computing resources and the growth of data have boosted the development of machine learning applications. Many computation frameworks, e.g., Hadoop, Spark, TensorFlow, and PyTorch, have been proposed and have become widely used in industry. However, programming large-scale machine learning applications is still challenging and requires manual effort from developers to achieve good performance.
For example, when parallelizing arrays across hundreds of CPU machines, it is critical to choose a good partition strategy that co-locates the arrays involved in a computation to reduce network communication. Unfortunately, existing distributed array frameworks usually use a default partition scheme and require manual partitioning whenever another parallel strategy is needed, making distributed array programs harder to develop. Another example is running deep learning applications on GPUs. A modern GPU can be orders of magnitude faster than a CPU and has become an attractive computation resource. Unfortunately, the limited memory size of a GPU restricts the scale of the DNN models that can be run. A computation framework that allows users to explore deeper and wider DNN models is therefore desirable.
Modern distributed frameworks generally adopt a dataflow-style programming paradigm. The dataflow graph of an application exposes valuable information to optimize the application. In this thesis, we present two techniques to address the above issues via dataflow graph analysis.
We first design Spartan to help users parallelize distributed arrays on a CPU cluster. Spartan is a distributed array framework, built on top of a set of higher-order dataflow operators. Based on the operators, Spartan provides a collection of Numpy-like array APIs. Developers can choose the built-in array APIs or directly use the operators to construct machine learning applications. To achieve good performance for the distributed application, Spartan analyzes the communication pattern of the dataflow graph captured through the operators and applies a greedy strategy to find a good partition scheme to minimize the communication cost.
To support memory-intensive deep learning applications on a single GPU, we develop SwapAdvisor, a swapping system that automatically swaps temporarily unused tensors from GPU memory to CPU memory. To minimize the communication overhead, SwapAdvisor analyzes the dataflow graph of the given DNN model and uses a custom-designed genetic algorithm to optimize the operator scheduling and memory allocation. Based on the optimized operator schedule and memory allocation, SwapAdvisor can determine what and when to swap to achieve a good performance.
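The abstract does not spell out SwapAdvisor's genetic algorithm, but the general recipe it names, searching over operator schedules under a memory cost model, can be sketched. Everything below (the cost model, operator sizes, and GA parameters) is invented for illustration; it is a toy sketch of the idea, not SwapAdvisor's implementation.

```python
import random

random.seed(1)

# Toy cost model (invented): operator i allocates a buffer of size[i]
# units at its schedule step, and the buffer stays live for life[i] steps.
size = [9, 9, 9, 1, 1, 1]
life = [2, 2, 2, 1, 1, 1]

def peak_memory(order):
    """Peak total live memory when operators run in `order`."""
    n = len(order)
    live = [0] * n
    for step, op in enumerate(order):
        for t in range(step, min(n, step + life[op])):
            live[t] += size[op]
    return max(live)

def crossover(a, b):
    """Order crossover: keep a slice of `a`, fill the rest in `b`'s order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    mid = a[i:j]
    rest = [x for x in b if x not in mid]
    return rest[:i] + mid + rest[i:]

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

def evolve(pop_size=30, gens=40):
    pop = [random.sample(range(len(size)), len(size)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=peak_memory)        # elitist: the fitter half survives
        pop = pop[: pop_size // 2]
        while len(pop) < pop_size:
            child = crossover(*random.sample(pop[:10], 2))
            if random.random() < 0.3:
                mutate(child)
            pop.append(child)
    return min(pop, key=peak_memory)

best = evolve()
```

With the toy sizes above, interleaving the large and small operators keeps the big buffers from overlapping, which is the kind of schedule the search converges toward; SwapAdvisor additionally co-optimizes memory allocation and swap decisions over the real dataflow graph.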
-
M.S. Thesis
2019
Leveraging Communication for Efficient Sampling
Kapoor, Sanyam
Abstract
|
PDF
Title: Leveraging Communication for Efficient Sampling
Candidate: Kapoor, Sanyam
Advisor(s): Bruna, Joan
Abstract:
Machine learning has shown promising success in tasks like classification, regression and, more recently, generation. However, long-term planning still remains a challenge for real-world deployment, and one of the key components of long-term planning is exploration. In this work, we discuss how communication can be leveraged to improve exploration of the sample space. We study this problem from the perspective of sampling from un-normalized density functions.
Hamiltonian Monte Carlo (HMC) struggles to sample from highly separated multimodal distributions, and parallel chains can be wasteful by the nature of Markov chain sampling. We see how replica exchange induces a weak form of communication. This is contrasted with a particle-based approach, Stein Variational Gradient Descent (SVGD), which induces a stronger form of communication via kernel evaluations. The quality of samples from both HMC and SVGD is evaluated with Maximum Mean Discrepancy. We finally propose graph neural networks with stronger inductive biases to amortize the dynamics of SVGD for fast generation of representative samples.
-
Ph.D. Thesis
2019
Compositional Abstractions for Verifying Concurrent Data Structures
Krishna, Siddharth
Abstract
|
PDF
Title: Compositional Abstractions for Verifying Concurrent Data Structures
Candidate: Krishna, Siddharth
Advisor(s): Thomas Wies
Abstract:
Formal verification has had great success in improving the reliability of real-world software, with projects such as ASTREE, CompCert, and Infer showing that rigorous mathematical analysis can handle the scale of today's cyber-infrastructure. However, despite these successes, many core software components are yet to be verified formally. Concurrent data structures are a class of algorithms that are becoming ubiquitous, as software systems seek to make use of the increasingly parallel design of computers and servers. These data structures use sophisticated algorithms to perform fine-grained synchronization between threads, making them notoriously difficult to design correctly, with bugs being found both in actual implementations and in the designs proposed by experts in peer-reviewed publications. The rapid development and deployment of these concurrent algorithms has resulted in a rift between the algorithms that can be verified by the state-of-the-art techniques and those being developed and used today. The goal of this dissertation is to bridge this gap and bring the certified safety of formal verification to the concurrent data structures used in practice.
Permission-based program logics such as separation logic have been established as the standard technique for verifying programs that manipulate complex heap-based data structures. These logics build on so-called separation algebras, which allow expressing properties of heap regions such that modifications to a region do not invalidate properties stated about the remainder of the heap. This concept is key to enabling modular reasoning and also extends to concurrency. However, certain data structure idioms prevalent in real-world programs, especially concurrent programs, are notoriously difficult to reason about, even in these advanced logics (e.g., random access into inductively defined structures, data structure overlays). The underlying issue is that while heaps are naturally related to mathematical graphs, many ubiquitous graph properties are non-local in character. Examples of such properties include reachability between nodes, path lengths, acyclicity and other structural invariants, as well as data invariants which combine with these notions. Reasoning modularly about such global graph properties remains a hard problem, since a local modification can have side-effects on a global property that cannot be easily confined to a small region.
This dissertation addresses the question: What separation algebra can be used to prove that a program maintains a global graph property by reasoning only about the local region modified by the program? We propose a general class of global graph properties, that we call flows, that can be expressed as fixpoints of algebraic equations over graphs. Flows can encode structural properties of the heap (e.g. the reachable nodes from the root form a tree), data invariants (e.g. sortedness), as well as combinations of both shape and data constraints of overlaid structures in a uniform manner. We then introduce the notion of a flow interface, an abstraction of a region in the heap, which expresses the constraints and guarantees between the region and its context with respect to the flow. Under a suitable notion of composition that preserves the flow values, we show that flow interfaces form the desired separation algebra.
Building on our theory of flows, we develop the flow framework, a general proof technique for modular reasoning about global graph properties over program heaps that can be integrated with existing separation logics. We further devise a strategy for automating this technique using SMT-based verification tools. We have implemented this strategy on top of the verification tool Viper and applied it successfully to a variety of challenging benchmarks including 1) algorithms involving general graphs such as Dijkstra's algorithm and a priority inheritance protocol, 2) inductive data structures such as linked lists and B-trees, 3) overlaid data structures such as the Harris list and threaded trees, and 4) OO design patterns such as Composite and Subject/Observer. We are not aware of any single other approach that can handle these examples with the same degree of simplicity or automation.
While the flow framework is applicable to any data structure, its features give rise to a new form of modular reasoning for certain concurrent data structures. Concurrent separation logics already apply modularity on multiple levels to simplify correctness proofs, decomposing them according to program structure, program state, and individual threads. Despite these advances, it remains difficult to achieve proof reuse across different data structure implementations. For the large class of concurrent search structures, we demonstrate how one can achieve further proof modularity by decoupling the proof of thread safety from the proof of structural integrity. We base our work on the template algorithms of Shasha and Goodman that dictate how threads interact but abstract from the concrete layout of nodes in memory. By using the flow framework of compositional abstractions in the separation logic Iris, we show how to prove correctness of template algorithms, and how to instantiate them to obtain multiple verified implementations. We demonstrate our approach by formalizing three concurrent search structure templates, based on link, give-up, and lock-coupling synchronization, and deriving implementations based on B-trees, hash tables, and linked lists. These case studies represent algorithms used in real-world file systems and databases, which have so far been beyond the capability of automated or mechanized state-of-the-art verification techniques. Our verification is split between the Coq proof assistant and the deductive verification tool GRASShopper in order to demonstrate that our proof technique and framework can be applied both in fully mechanized proof assistants as well as automated program verifiers. In addition, our approach reduces proof complexity and is able to achieve significant proof reuse.
-
Ph.D. Thesis
2019
Parallel Contact-Aware Algorithms for Large-Scale Direct Blood Flow Simulations
Lu, Libin
Abstract
|
PDF
Title: Parallel Contact-Aware Algorithms for Large-Scale Direct Blood Flow Simulations
Candidate: Lu, Libin
Advisor(s): Zorin, Denis
Abstract:
Experimental and theoretical evidence suggests that blood flow can be well approximated by a mixture model of a Newtonian fluid and deformable particles representing the red blood cells. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Given the importance of vesicle flows, in this thesis we focus on efficient numerical methods for such problems: we present computationally parallel-scalable algorithms for the simulation of dense deformable vesicles in two and three dimensions, in both unbounded and bounded domains.
Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions, and the time step size is independent of the volume fraction. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly.
Introducing contact constraints results in a significant increase in the stable time-step size for locally-implicit time-stepping, and a reduction in the number of points adequate for stability. Our method permits simulations with high volume fractions; we report results with up to 60% volume fraction. We demonstrate the parallel scaling of the algorithms on up to 35K CPU cores.
-
TR2019-994
2019
The Block FETI--DP/BDDC Preconditioners for Mixed Isogeometric Discretizations of Three-Dimensional Almost Incompressible Elasticity
Pavarino, Luca F.;
Scacchi, Simone; Widlund, Olof B.; Zampini, Stefano
Abstract
|
PDF
Title: The Block FETI--DP/BDDC Preconditioners for Mixed Isogeometric Discretizations of Three-Dimensional Almost Incompressible Elasticity
Author(s): Pavarino, Luca F.; Scacchi, Simone; Widlund, Olof B.; Zampini, Stefano
Abstract:
A block FETI--DP/BDDC preconditioner for mixed formulations of almost incompressible elasticity is constructed and analyzed; FETI--DP (dual primal finite element tearing and interconnection) and BDDC (balancing domain decomposition by constraints) are two very successful domain decomposition algorithms for a variety of elliptic problems. The saddle point problems of the mixed formulations are discretized with mixed isogeometric analysis with continuous pressure fields. As in previous work by Tu and Li (2015) on finite element discretizations of the incompressible Stokes system, the proposed preconditioner is applied to a reduced positive definite system involving only the pressure interface variable and the Lagrange multipliers of the FETI--DP algorithm. The novelty of this preconditioner consists in using BDDC with deluxe scaling for the interface pressure block as well as deluxe scaling for the FETI--DP preconditioner for the Lagrange multiplier block. A convergence rate analysis is presented with a condition number bound for the preconditioned operator which depends on the inf-sup parameter of the fully assembled problem and the condition number of a closely related BDDC algorithm for compressible elasticity. This bound is scalable in the number of subdomains, poly-logarithmic in the ratio of subdomain and element sizes, and robust with respect to material incompressibility and the presence of discontinuities of the Lamé parameters across subdomain interfaces. Parallel numerical experiments validate the theory and indicate how the rate of convergence varies with respect to the spline polynomial degree and regularity and the deformation of the domain. Of particular interest is the development of variants of the algorithm with a coarse component of small dimension.
-
Ph.D. Thesis
2019
Leveraging Program Analysis for Type Inference
Pavlinovic, Zvonimir
Abstract
|
PDF
Title: Leveraging Program Analysis for Type Inference
Candidate: Pavlinovic, Zvonimir
Advisor(s): Wies, Thomas
Abstract:
Type inference is a popular feature of programming languages used to automatically guarantee the absence of certain execution errors in programs at compile time. The convenience of type inference, unfortunately, comes at a cost. Developing type inference algorithms is a challenging task that currently lacks a systematic approach. Moreover, programmers often have problems interpreting error reports produced by type inference. The overarching goal of this thesis is to provide a mathematically rigorous framework for the systematic development of sophisticated type inference algorithms that are convenient for programmers to use. To this end, we focus on two specific problems in this thesis: (1) how to constructively design type inference algorithms that improve over the state of the art and (2) how to automatically debug type errors that arise during inference. We base our approach on the observation that, similar to type inference, program analysis algorithms automatically discover various program properties that can be used to show program correctness. Although similar, type inference and program analysis techniques have traditionally been developed independently of each other. In contrast, this thesis further explores the recent path of leveraging program analysis for type inference.
As our first contribution, we use abstract interpretation to constructively design type inference algorithms. We specifically focus on Liquid types, an advanced family of algorithms that combine classical typing disciplines and known static analyses to prove various safety properties of functional programs. By using abstract interpretation, we make the design space of Liquid type inference explicit. We also unveil the general type inference framework underlying Liquid types. By properly instantiating this general framework, one obtains novel type inference algorithms that are sound by construction.
Our second contribution is a framework for automatically debugging type errors for languages that deploy type inference in the style of Hindley-Milner, such as OCaml and Haskell. Such languages are notorious for producing cryptic type error reports that are often not helpful in fixing the actual bug. We formulate the problem of finding the root cause of type errors as an optimization problem expressed in a formal logic. We then show how to solve this problem using automated theorem provers. We experimentally illustrate how our framework can efficiently produce type error reports that outperform the state-of-the-art solutions in identifying the true cause of type errors.
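The "root cause as optimization" framing above can be illustrated with a deliberately tiny toy, invented here and far simpler than the thesis's SMT encoding over real OCaml programs: typing constraints carry weights, and the diagnosis is the minimum-weight subset of constraints whose removal leaves the remainder satisfiable.

```python
# Toy sketch (not the thesis's encoding): constraints are flat equalities
# over type terms; uppercase names are type constants, lowercase names are
# type variables. A type error means the constraint set is unsatisfiable.

def satisfiable(eqs):
    """Unify equality constraints with a simple union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x

    def is_const(t):
        return t[0].isupper()

    for a, b in eqs:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        if is_const(ra) and is_const(rb):
            return False  # two distinct concrete types forced equal
        if is_const(ra):
            parent[rb] = ra
        else:
            parent[ra] = rb
    return True

def min_weight_fix(eqs, weight):
    """Brute-force the minimum-weight subset of constraints to drop."""
    n = len(eqs)
    best = (float("inf"), ())
    for mask in range(1 << n):
        drop = {i for i in range(n) if mask >> i & 1}
        w = sum(weight[i] for i in drop)
        rest = [e for i, e in enumerate(eqs) if i not in drop]
        if w < best[0] and satisfiable(rest):
            best = (w, tuple(sorted(drop)))
    return best

# e.g. `let x = 1 in if x then 0 else 1`:
# the binding forces x = Int, the `if` condition forces x = Bool.
eqs = [("x", "Int"), ("x", "Bool")]
w, culprit = min_weight_fix(eqs, weight=[2, 1])  # weights, e.g., expression sizes
```

With these (made-up) weights the cheaper constraint is blamed, i.e. the use of `x` as a Boolean; the thesis solves the analogous optimization with automated theorem provers rather than enumeration.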
In summary, this thesis introduces a mathematical framework for the systematic design of sophisticated type inference algorithms that are sound by construction. Our results further enable automatic generation of more meaningful type error diagnostics, ultimately making type inference more usable for programmers.
-
Ph.D. Thesis
2019
Concentration and Anti-concentration for Markov Chains
Rao, Shravas
Abstract
|
PDF
Title: Concentration and Anti-concentration for Markov Chains
Candidate: Rao, Shravas
Advisor(s): Regev, Oded
Abstract:
We study tail bounds and small ball probabilities for sums of random variables obtained from a Markov chain. In particular, we consider the following sum \(S_n = f_1(Y_1)+\cdots+f_n(Y_n)\) where \(\{Y_i\}_{i=1}^{\infty}\) is a Markov chain with state space \([N]\), transition matrix \(A\), and stationary distribution \(\mu\) such that \(Y_1\) is distributed as \(\mu\), and \(f_i: [N] \rightarrow \mathbb{R}\). We also consider settings in which \(f_i(Y_i)\) is vector-valued.
In all results, the bounds are in terms of the spectral gap of the Markov chain. In almost all of the results in this thesis, when the transitions are independent and the spectral gap is \(1\), the bounds match the corresponding bounds for independent random variables up to constant factors.
We first obtain tail bounds in the case that only the \(p\)th moment of the random variable \(f_i(Y_i)\) is bounded. This is a Markov chain version of a corollary of the Marcinkiewicz–Zygmund inequality. Using this, we also obtain tail bounds for \(S_n\) when the \(f_i(Y_i)\) are elements of an \(\ell_q\) space.
Next, we obtain sharp tail bounds when the random variables \(f_i(Y_i)\) are bounded and the expected value of \(S_n\) is small. This is a Markov chain version of a Poisson approximation to sums of independent random variables. As an application, we explain how such tail bounds can be used to construct simple and explicit resilient functions that match the non-constructive functions shown to exist due to the work of Ajtai and Linial.
Next, we obtain tail bounds in the case that the \(f_i(Y_i)\) are bounded in the range \([-a_i, a_i]\) for each \(i\). This is a Markov chain version of the Hoeffding inequality. This improves upon previously known bounds in that the dependence is on \(\sqrt{a_1^2+\cdots+a_n^2}\) rather than \(\max_{i}\{a_i\}\sqrt{n}.\) Using this, we obtain tail bounds for certain types of random variables in which the \(f_i(Y_i)\) are elements of any Banach space.
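For orientation, the Hoeffding-type bound described in this paragraph has, schematically, the following shape, where \(\gamma\) is the spectral gap of the chain and \(c > 0\) an absolute constant (the precise constants and conditions in the thesis may differ):

```latex
\Pr\big[\, |S_n - \mathbb{E}[S_n]| \ge t \,\big]
  \;\le\; 2 \exp\!\left( \frac{-c\, \gamma\, t^{2}}{a_1^{2} + \cdots + a_n^{2}} \right).
```

When the \(Y_i\) are independent, \(\gamma = 1\) and this recovers the classical Hoeffding dependence on \(\sqrt{a_1^2+\cdots+a_n^2}\) up to constant factors.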
Finally, we show that if the \(f_i(Y_i)\) take on values \(\{-a_i, a_i\}\) with equal probability and the \(a_i\) are Euclidean vectors with norm at least \(1\), the probability that \(S_n\) lies in a ball of volume \(1\) is small. This is a Markov chain version of the Littlewood-Offord inequality. We also construct a new pseudorandom generator for the Littlewood-Offord problem.
-
M.S. Thesis
2019
Machine Learning Applications to Protein Variant Effect Prediction
Soules, Jeffrey
Abstract
|
PDF
Title: Machine Learning Applications to Protein Variant Effect Prediction
Candidate: Soules, Jeffrey
Advisor(s): Bonneau, Richard
Abstract:
Proteins are microscopic machines whose activity forms the basis of all life processes. If a mutation causes variation in the typical amino acid sequence of a protein, the protein’s normal biological function may be compromised. Variant Interpretation and Prediction Using Rosetta (VIPUR) uses sequence and structural data features to predict whether a mutation is deleterious to the protein’s function. VIPUR was originally released with a curated set of protein variants as its training data, on which it achieved 80% accuracy. However, the original design was tightly coupled to a logistic regression classifier, so other machine learning techniques could not be easily tested. The reimplementation of VIPUR presented in this work offers a modular design that can be extended with classifiers built on any machine learning approach. It establishes a methodologically sound basis for experimentation with new classifiers, data features, and data sets. This work examines the predictive power of the data features in the original VIPUR training set and establishes a high baseline for classification performance based on one strongly predictive feature category. The present work includes classifier modules built with four machine learning approaches: logistic regression, support vector machines, gradient-boosted forests, and neural networks. These represent the two model types considered in the original VIPUR work and two more recent classifier types. The modules are trained with automated hyperparameter selection and rigorously evaluated with k-fold cross-validation, establishing a baseline of performance for future experiments. Results show very slight improvement over the original logistic regression method, consistent with the dominance of a small handful of features in determining classification results.
Potential new data features and sources are discussed, which can be used in the new VIPUR design without modification while maintaining backwards compatibility with previously trained classifiers.
-
Ph.D. Thesis
2019
Approximation algorithms, Hardness, and PCPs
Thiruvenkatachari, Devanathan
Abstract
|
PDF
Title: Approximation algorithms, Hardness, and PCPs
Candidate: Thiruvenkatachari, Devanathan
Advisor(s): Khot, Subhash
Abstract:
This thesis is a collection of theoretical results on the topic of approximation algorithms and hardness of approximation. The results presented here use a combination of classical and modern techniques to achieve better approximation algorithms and hardness results for some pivotal NP-hard problems and their variants. We study CSPs from a multi-objective point of view, with the goal of simultaneous optimization of multiple instances over the same set of variables, with MAX-CUT as the central focus. We provide an approximation algorithm that is near optimal assuming the unique games conjecture. We also study PCPs and their role in hardness of approximation, and present a hardness result for 3-LIN in the sub-constant soundness regime. Lastly, dictatorship testing is a property testing problem with significant applications in proving hardness results, and we present an improvement on the soundness of the k-bit dictatorship test with perfect completeness.
-
Ph.D. Thesis
2019
Tactile Perception Design for Fabrication
Tymms, Chelsea
Abstract
|
PDF
Title: Tactile Perception Design for Fabrication
Candidate: Tymms, Chelsea
Advisor(s): Zorin, Denis
Abstract:
High-resolution 3D printing technology provides the ability to manufacture shapes with precise geometry. Controlling this fine-scale geometry to confer haptic qualities is a growing area of research in fabrication. In this thesis, I will present three projects addressing the question of how to fabricate surface textures with controlled tactile properties and exploring how tactile textures can be used in custom manufacturing and to expand the understanding of the human sense of touch.
Surface roughness is one of the most significant qualities in haptic perception, essential to material identification, comfort, and usability. Past perceptual studies on roughness have typically used stimuli that are existing materials or in a narrow range of custom-made materials. In the first project presented in this thesis, we explore the use of 3D printing to manufacture stimuli. We used modeling and 3D printing to manufacture a set of fine parametric bump textures, and we used these texture stimuli in a psychophysical study of human roughness perception. We investigated the contribution of the texton spacing, size, and arrangement to the texture's perceived tactile roughness.
In the second project, we quantitatively address the problem of mapping arbitrary texture geometry to tactile roughness. Drawing from insights in past neurophysiology research, we developed a model that simulates human touch to predict a texture's tactile roughness from its surface geometry. We fabricated a set of 46 parametric and real-life textures, and we used psychophysical experiments with human subjects to place them in the perceptual space for tactile roughness using non-metric multidimensional scaling. We closely match this space with our quantitative model, obtained from strain fields derived from the elasticity simulations of the human skin contacting texture geometry. We demonstrate how this model can be applied to predict and alter surface roughness, and we show several applications in the context of fabrication.
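The scaling step can be illustrated with a small example. The sketch below uses classical (metric) MDS for brevity and determinism, where the thesis used non-metric multidimensional scaling, and the dissimilarity matrix is a toy stand-in for the perceptual judgments (the thesis placed 46 fabricated textures in this space):

```python
import numpy as np

def classical_mds(D, k=1):
    """Classical (metric) MDS via double centering; a deterministic
    stand-in for the non-metric MDS used in the thesis."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]           # keep the k largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy pairwise dissimilarities over 4 hypothetical textures.
D = np.array([[0.0, 1.0, 3.0, 4.0],
              [1.0, 0.0, 2.0, 3.0],
              [3.0, 2.0, 0.0, 1.0],
              [4.0, 3.0, 1.0, 0.0]])
coords = classical_mds(D).ravel()   # 1-D perceptual scale of roughness

# Textures 0 and 1 land near each other, far from texture 3.
assert abs(coords[0] - coords[1]) < abs(coords[0] - coords[3])
```

The embedding recovers a one-dimensional ordering of the textures that preserves the dissimilarity structure, which is the role the perceptual space plays in the comparison with the quantitative roughness model.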
The third project extends these ideas by developing a method to control a texture's haptic qualities and visual appearance at the same time. The tactile feeling and visual appearance of objects often interact in unpredictable ways, and both serve important purposes for identification and usability. In this project, we develop an optimization method to maintain a texture's visual appearance while altering its perceived tactile roughness or tactile temperature. Our optimization method, which is enabled by neural network-based models, allows us to change a texture to a different desired tactile feeling while preserving the visual appearance, at a relatively low computational cost.
-
M.S. Thesis
2019
Cold Case: The Lost MNIST Digits
Yadav, Chhavi
Abstract
|
PDF
Title: Cold Case: The Lost MNIST Digits
Candidate: Yadav, Chhavi
Advisor(s): Fergus, Rob
Abstract:
Although the popular MNIST dataset (LeCun, Cortes, and Burges 1994) is derived from the NIST database (Grother and Hanaoka 1995), the precise processing steps for this derivation have been lost to time. We propose a reconstruction that is accurate enough to serve as a replacement for the MNIST dataset, with insignificant changes in accuracy. We trace each MNIST digit to its NIST source and its rich metadata, such as writer identifier and partition identifier. We also reconstruct the complete MNIST test set with 60,000 samples instead of the usual 10,000. Since the remaining 50,000 samples were never distributed, they enable us to investigate the impact of twenty-five years of MNIST experiments on the reported testing performances. Our results unambiguously confirm the trends observed by Recht et al. (2018, 2019): although the misclassification rates are slightly off, classifier ordering and model selection remain broadly reliable. We attribute this phenomenon to the pairing benefits of comparing classifiers on the same digits.
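The pairing benefit can be illustrated with a small simulation (all rates and the shared "hard digit" component are illustrative assumptions, not measurements from MNIST):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # size of a hypothetical held-out digit set

# Per-digit errors for two classifiers that share a set of "hard"
# digits both tend to miss; the rates here are illustrative only.
hard = rng.random(n) < 0.02
err_a = (hard | (rng.random(n) < 0.01)).astype(float)  # ~3% error
err_b = (hard | (rng.random(n) < 0.02)).astype(float)  # ~4% error

# Unpaired: difference of two error rates measured on independent digits.
# Paired: mean per-digit difference on the SAME digits; the shared
# "hard digit" component cancels, shrinking the estimator's variance.
diff = err_a - err_b
paired_se = diff.std(ddof=1) / np.sqrt(n)
unpaired_se = np.sqrt(err_a.var(ddof=1) / n + err_b.var(ddof=1) / n)
assert paired_se < unpaired_se  # pairing yields a tighter comparison
```

Because the two classifiers are evaluated on the same digits, their shared difficulty cancels in the per-digit difference, which is why classifier ordering stays reliable even when absolute error rates drift.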
-
Ph.D. Thesis
2019
End-to-End Learning for Autonomous Driving
Zhang, Jiakai
Abstract
|
PDF
Title: End-to-End Learning for Autonomous Driving
Candidate: Zhang, Jiakai
Advisor(s): Cho, Kyunghyun
Abstract:
The end-to-end learning approach for autonomous driving has sparked great interest in both academia and industry in recent years. The approach can be defined as learning a model that maps from sensory input, such as image frames from a camera, to driving actions for controlling the autonomous vehicle, such as steering. Compared to the traditional autonomous driving system, which often includes perception, localization, mapping, and path planning, the end-to-end learning approach offers a more efficient method of utilizing large amounts of expert driver demonstrations to achieve fully autonomous driving without acquiring expensive labeled data such as bounding boxes for objects.
The end-to-end learning for autonomous driving can be done by supervised learning, where a model is tuned to minimize the difference between predicted actions and ground-truth actions. The ground truth of driving actions is usually obtained from driver demonstrations. A model trained in this way, however, suffers from unexpected behaviors due to the mismatch between the samples visited by a learned model and the samples collected by an expert driver. To address this issue, we first introduce an end-to-end supervised learning approach with data augmentation to train a model to keep a vehicle driving at the center of a lane. The data augmentation is done by synthetically generating new samples through rotating and translating input images captured from a front-facing camera and calculating compensatory steering. We show that using such automatically-augmented data, a trained model can drive a car to follow a lane in various conditions on highways and local and residential roads.
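The compensatory-steering idea can be sketched as follows (a minimal geometric sketch; the sign convention, recovery distance, and bicycle-model wheelbase are assumptions, not values from the thesis):

```python
import math

def compensatory_steering(lateral_shift_m, rotation_rad,
                          recovery_distance_m=10.0, wheelbase_m=2.8):
    """For a synthetically shifted (positive = shifted left of center)
    and rotated camera viewpoint, compute a steering angle that brings
    the car back to the lane center after a fixed recovery distance.
    The recovery distance and wheelbase are illustrative assumptions."""
    # Heading change needed to aim back at the lane center, plus
    # cancellation of the synthetic rotation itself.
    aim = math.atan2(-lateral_shift_m, recovery_distance_m)
    desired_heading_change = aim - rotation_rad
    # Simple bicycle-model steering angle for that heading change.
    return math.atan(wheelbase_m * desired_heading_change / recovery_distance_m)

# A viewpoint shifted left steers right (negative), and vice versa.
left_shifted = compensatory_steering(0.3, 0.0)
right_shifted = compensatory_steering(-0.3, 0.0)
assert left_shifted < 0 < right_shifted
```

Each augmented image is paired with the steering computed this way, so the model sees recovery behavior it would never observe in expert demonstrations alone.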
Instead of generating augmented data, we can also collect new samples when trying out the learned model. Aiming to reduce the number of times an expert is queried for labels, we propose the SafeDAgger algorithm, a query-efficient imitation learning approach. We show that our method significantly reduces the number of expert queries and trains a driving model more efficiently. A model trained by our proposed SafeDAgger algorithm can successfully drive a racing car in a simulator to do lane following and overtaking.
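The query-gating idea behind SafeDAgger can be sketched as follows (toy one-dimensional policies and a thresholded safety classifier stand in for the learned networks; all numbers are illustrative):

```python
import random

random.seed(0)

def expert_policy(state):
    # Stand-in expert: steer toward the lane center (state = lateral offset).
    return -0.5 * state

def learner_policy(state):
    # Imperfect learned policy (illustrative).
    return -0.4 * state

def safety_classifier(state):
    # Predicts whether the learner's action stays close enough to the
    # expert's; a fixed threshold stands in for the learned classifier.
    return abs(learner_policy(state) - expert_policy(state)) < 0.05

queries = 0
dataset = []
for step in range(1000):
    state = random.uniform(-1.0, 1.0)
    if safety_classifier(state):
        action = learner_policy(state)   # deemed safe: no expert query
    else:
        action = expert_policy(state)    # unsafe: expert takes over
        dataset.append((state, action))  # only these states need labels
        queries += 1

assert 0 < queries < 1000  # only predicted-unsafe states cost a query
```

Compared with plain DAgger, which queries the expert on every visited state, the gate restricts labeling (and expert control) to the states where the learner is predicted to deviate.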
The expert demonstrations provided by humans and used for training models often show significant variability due to latent factors. Given such expert demonstrations, a model trained by minimizing the difference between the expert driving actions and predicted driving actions can output dangerous driving actions that may cause serious accidents. We address this issue by introducing a variational mixture density network to model the variability using a discrete latent variable. The experimental results in a racing car simulator show that the model trained using our proposed method can learn the variability of driving signals from expert demonstrations and successfully distinguish certain driving behaviors such as changing lanes and following lanes.
We introduce a simulator to support the development, training, and evaluation of autonomous driving systems using the end-to-end learning approaches. Leveraging this simulator, we demonstrate how to train and evaluate models to drive a truck that follows a navigation map in a video game.
In summary, this thesis introduces the end-to-end learning approaches for autonomous driving to address the data mismatch issue and learn the variability of expert driving actions. Our results show that the trained model can drive the vehicle to follow a lane, change lanes and make turns in simulated driving environments.
-
Ph.D. Thesis
2019
Text Representation using Convolutional Networks
Zhang, Xiang
Abstract
|
PDF
Title: Text Representation using Convolutional Networks
Candidate: Zhang, Xiang
Advisor(s): LeCun, Yann
Abstract:
This dissertation applies convolutional networks to learning representations of text, and it consists of several parts. The first part offers an empirical exploration of the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks. These results indicate that using low-level inputs, in this case characters, for convolutional networks could be feasible for text representation learning.
The second part concerns which text encoding method might work for convolutional networks. We include a comprehensive comparison of different encoding methods for the task of text classification using 14 large-scale datasets in 4 languages including Chinese, English, Japanese and Korean. Different encoding levels are studied, including UTF-8 bytes, characters, words, romanized characters and romanized words. For all encoding levels, whenever applicable, we provide comparisons with linear models, fastText and convolutional networks. For convolutional networks, we compare between encoding mechanisms using character glyph images, one-hot (or one-of-n) encoding, and embedding. From these 473 models, one of the conclusions is that byte-level one-hot encoding works consistently best for convolutional networks.
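A byte-level one-hot encoding of the kind found to work best can be sketched as follows (a minimal illustration; `max_len` and the helper name are assumptions, not from the dissertation):

```python
import numpy as np

def byte_one_hot(text, max_len=16):
    """One-hot encode the UTF-8 bytes of `text` into a (max_len, 256)
    matrix, truncating or zero-padding as needed; a minimal sketch of
    byte-level ConvNet input, not the dissertation's code."""
    data = text.encode("utf-8")[:max_len]
    mat = np.zeros((max_len, 256), dtype=np.float32)
    for i, b in enumerate(data):
        mat[i, b] = 1.0   # one row per byte, one column per byte value
    return mat

x = byte_one_hot("héllo")       # 'é' spans two UTF-8 bytes
assert x.shape == (16, 256)
assert x.sum() == 6.0           # 6 bytes set; remaining rows are padding
```

Because UTF-8 bytes are language-agnostic, the same 256-dimensional input alphabet covers Chinese, English, Japanese and Korean text without any tokenization or romanization step.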
Based on this, in the third part of the dissertation we develop a convolutional network at the level of bytes for learning representations through the task of auto-encoding. The proposed model is a multi-stage deep convolutional encoder-decoder framework using residual connections, containing up to 160 parameterized layers. Each encoder or decoder contains a shared group of modules that consists of either pooling or up-sampling layers, making the network recursive in terms of abstraction levels in representation. The decoding process is non-sequential. Results for 6 large-scale paragraph datasets are reported, in 3 languages including Arabic, Chinese and English. Analyses are conducted to study several properties of the proposed model. Experiments are presented to verify that the auto-encoder can learn useful representations.
In the fourth part of the dissertation, we apply the improved design from the previous auto-encoding model to text classification, adding comparisons between residual and dense connections. This further validates the choice of architecture we made for the auto-encoding model, and the effectiveness of the recursive architecture with residual or dense connections.
-
Ph.D. Thesis
2019
Unsupervised Learning with Regularized Autoencoders
Zhao, Junbo
Abstract
|
PDF
Title: Unsupervised Learning with Regularized Autoencoders
Candidate: Zhao, Junbo
Advisor(s): LeCun, Yann
Abstract:
Deep learning has enjoyed remarkable successes in a variety of domains. These successes often come at the cost of large annotated datasets and computationally heavy neural network models. The learning paradigm for this is called supervised learning. However, reducing the sample complexity while improving the universality of the trained models is a crucial next step on the path to artificial intelligence. Unsupervised learning, in contrast to supervised learning, aims to build neural network models with more generic loss objectives requiring little or no labelling effort, and therefore it is not tied to any specific domain or task. In spite of the brevity of its goal, unsupervised learning is a broad topic that relates to or includes several sub-fields, such as density estimation, generative modeling, and world models. In this thesis, we primarily adopt an energy-based view unifying these different fields. A desired energy function reflects the data manifold by differentiating the energy assigned to points on the data manifold from that assigned to points off the manifold. With this foundation, we first cast the popular autoencoder and adversarial learning frameworks into an energy-based perspective, and then propose several techniques and architectures motivated by learning a better-shaped energy function. We also show that the proposed techniques in this thesis cover a wide spectrum of applications, including image/text generative modeling, text summarization, style transfer without aligned data, and transfer/semi-supervised learning in both computer vision and natural language processing. The thesis is organized as follows. First, we assess the validity and the main challenges of energy-based learning. We then introduce two frameworks focusing on strengthening autoencoders by building unit connection hierarchies via either hard-coded pooling or self-learned graphs.
Finally, we propose several systematic regularization techniques, based on adversarial training and vector discretization.
-
TR2018-990
2018
Platform Migrator
Contractor, Munir;
Pradal, Christophe; Shasha, Dennis
Abstract
|
PDF
Title: Platform Migrator
Author(s): Contractor, Munir; Pradal, Christophe; Shasha, Dennis
Abstract:
Currently, one of the major problems in software development and maintenance, especially in academia, is managing packages across time and systems. An application developed under a particular package manager using a certain set of packages does not always work reliably when ported to a different system or when abandoned for a period of time and picked up again with newer versions of the packages. In this report, we provide and describe Platform Migrator, a software tool that makes it easy to test applications across systems by identifying the various packages in the base system, figuring out their corresponding equivalents in the new system, and testing whether the software works as expected on the new system. Platform Migrator can migrate software written and set up inside a conda environment to any Linux-based system with conda or some other package manager. The philosophy of Platform Migrator is to identify a closure of the required dependencies for the software being migrated using the conda environment metadata and then use that closure to install the various dependencies on the target system. This documentation provides comprehensive details on how to use Platform Migrator and what it does internally to migrate software from one system to another. It also contains tutorials and case studies that can be replicated for better understanding of the process.
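The dependency-closure idea can be sketched as a graph traversal (a minimal illustration; the package names and the `depends_on` mapping are hypothetical stand-ins for conda environment metadata):

```python
from collections import deque

def dependency_closure(packages, depends_on):
    """Compute the transitive closure of dependencies for the packages
    to migrate. `depends_on` maps a package name to an iterable of its
    direct dependencies."""
    closure, queue = set(), deque(packages)
    while queue:
        pkg = queue.popleft()
        if pkg in closure:
            continue                       # already visited
        closure.add(pkg)
        queue.extend(depends_on.get(pkg, ()))
    return closure

# Illustrative package graph (not real conda metadata).
deps = {
    "myapp": ["numpy", "requests"],
    "numpy": ["libblas"],
    "requests": ["urllib3", "certifi"],
}
closure = dependency_closure(["myapp"], deps)
assert closure == {"myapp", "numpy", "requests", "libblas", "urllib3", "certifi"}
```

The resulting closure is the complete list of packages that must be resolved against the target system's package manager before the application can run there.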
-
Ph.D. Thesis
2018
Deep Generative Models of Images and Video
Denton, Emily Lynn
Abstract
|
PDF
Title: Deep Generative Models of Images and Video
Candidate: Denton, Emily Lynn
Advisor(s): Fergus, Rob
Abstract:
Deep neural networks have seen wide success in the supervised setting in recent years. Many of these successes rely heavily on large training sets of manually annotated data. Given the difficulty of obtaining enough labeled data to scale many deep learning approaches, it is increasingly important to look for better methods of utilizing large amounts of unlabeled data. Building generative models of images and video is a fundamental paradigm of learning from unlabeled data. Unsupervised criteria based on generating or reconstructing images drive many representation learning frameworks. Video is a particularly appealing domain for unsupervised learning due to the inherent temporal structure of the data. This structure lends itself to representation learning approaches based on extracting invariances and predicting future frames, given the past.
Additionally, building accurate models of the world that facilitate future prediction can be useful for model based reinforcement learning, planning, and more generally, endowing an agent with the capacity to reason about its environment. Incorporating predictive models can potentially help alleviate the sample inefficiency of many reinforcement learning systems.
In this thesis, we review the challenges associated with generating images and videos. We then introduce a multi-scale image generation framework that demonstrates impressive performance on real-world image datasets. This method was the first to demonstrate empirically the potential of generative adversarial networks. We also address two challenging aspects of video generation: learning a latent space that affords easier prediction and modeling the uncertainty in video sequences.
-
M.S. Thesis
2018
Detecting Dead Weights and Units in Neural Networks
Evci, Utku
Abstract
|
PDF
Title: Detecting Dead Weights and Units in Neural Networks
Candidate: Evci, Utku
Advisor(s): Fergus, Rob
Abstract:
Deep neural networks are highly over-parameterized, and the size of a neural network can be reduced significantly after training without any decrease in performance. One can clearly see this phenomenon in a wide range of architectures trained for various problems. Weight/channel pruning, distillation, quantization, and matrix factorization are some of the main methods one can use to remove this redundancy and arrive at smaller and faster models.
This work starts with a short informative chapter, where we motivate the pruning idea and provide the necessary notation. In the second chapter, we compare various saliency scores in the context of parameter pruning. Using the insights obtained from this comparison, and noting the problems parameter pruning brings, we motivate why pruning units instead of individual parameters might be a better idea. We propose a set of definitions to quantify and analyze units that do not learn or create any useful information. We propose an efficient way of detecting dead units and use it to select which units to prune. We obtain a 5x model-size reduction through unit-wise pruning on MNIST.
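One simple notion of a "dead" unit, used here as an illustrative stand-in for the definitions proposed in the thesis, is a unit whose post-ReLU activation is zero on every input in a probe batch:

```python
import numpy as np

rng = np.random.default_rng(1)

def dead_units(activations, tol=1e-6):
    """Flag units whose activations are (near) zero on every probe
    input; `activations` has shape (batch, units)."""
    return np.all(np.abs(activations) < tol, axis=0)

# Toy layer: 8 units; the last two are rigged so their pre-activations
# are never positive, making them dead under ReLU.
W = rng.normal(size=(20, 8))
W[:, 6:] = -np.abs(W[:, 6:])            # force non-positive columns
X = np.abs(rng.normal(size=(128, 20)))  # non-negative probe inputs
acts = np.maximum(X @ W, 0.0)           # ReLU activations

mask = dead_units(acts)
assert mask[6] and mask[7]              # the rigged units never fire
```

Units flagged this way contribute nothing to the layer's output on the probe distribution, so they (and their incoming and outgoing weights) are candidates for removal without affecting accuracy.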
-
Ph.D. Thesis
2018
Deep Networks for Forward Prediction and Planning
Henaff, Mikael Bruce
Abstract
|
PDF
Title: Deep Networks for Forward Prediction and Planning
Candidate: Henaff, Mikael Bruce
Advisor(s): LeCun, Yann
Abstract:
Learning to predict how an environment will evolve and the consequences of one’s actions is an important ability for autonomous agents, and can enable planning with relatively few interactions with the environment which may be slow or costly. However, learning an accurate forward model is often difficult in practice due to several features often present in complex environments. First, many environments exhibit long-term dependencies which require the system to learn to record and maintain relevant information in its memory over long timescales. Second, the environment may only be partially observed, and the aspects of the environment which are observed may depend on parts of the environment which are hidden. Third, many observed processes contain some form of apparent or inherent stochasticity, which makes the task of predicting future states ill-defined. In this thesis, we propose approaches to tackle and better understand these different challenges associated with learning predictive models of the environment and using them for planning. We first provide an analysis of recurrent neural network (RNN) memory, which sheds light on the mechanisms by which RNNs are able to store different types of information in their memory over long timescales through the analysis of two synthetic benchmark tasks. We then introduce a new neural network architecture which keeps an estimate of the state of the environment in its memory, and can deal with partial observability by reasoning based on what is observed. We next present a new method for performing planning using a learned model of the environment with both discrete and continuous actions. Finally, we propose an approach for model-based planning in the presence of both environment uncertainty and model uncertainty, and evaluate it on a new real-world dataset and environment with applications to autonomous driving.
-
Ph.D. Thesis
2018
Learning Representations of Text through Language and Discourse Modeling: From Characters to Sentences
Jernite, Yacine
Abstract
|
PDF
Title: Learning Representations of Text through Language and Discourse Modeling: From Characters to Sentences
Candidate: Jernite, Yacine
Advisor(s): Sontag, David
Abstract:
In this thesis, we consider the problem of obtaining a representation of the meaning expressed in a text. How to do so correctly remains a largely open problem, combining a number of inter-related questions (e.g. what is the role of context in interpreting text? how should language understanding models handle compositionality? etc.). In this work, after reflecting on some of these questions and describing the most common sequence modeling paradigms in use in recent work, we focus on two questions specifically: what level of granularity text should be read at, and what training objectives can lead models to learn useful representations of a text’s meaning.
In the first part, we argue for the use of sub-word information for that purpose, and present new neural network architectures which can either process words in a way that takes advantage of morphological information, or do away with word separations altogether while still being able to identify relevant units of meaning.
The second part starts by arguing for the use of language modeling as a learning objective, provides algorithms which can help with its scalability issues, and proposes a globally rather than locally normalized probability distribution. It then explores the question of what makes a good language learning objective, and introduces discriminative objectives inspired by the notion of discourse coherence which help learn a representation of the meaning of sentences.
-
Ph.D. Thesis
2018
Deep Learning for Information Extraction
Nguyen, Thien Huu
Abstract
|
PDF
Title: Deep Learning for Information Extraction
Candidate: Nguyen, Thien Huu
Advisor(s): Grishman, Ralph
Abstract:
The explosion of data has made it crucial to analyze the data and distill important information effectively and efficiently. A significant part of such data is presented in unstructured and free-text documents. This has prompted the development of techniques for information extraction that allow computers to automatically extract structured information from natural free-text data. Information extraction is a branch of natural language processing in artificial intelligence with a wide range of applications, including question answering, knowledge base population, and information retrieval. The traditional approach for information extraction has mainly involved hand-designing large feature sets (feature engineering) for different information extraction problems, i.e., entity mention detection, relation extraction, coreference resolution, event extraction, and entity linking. This approach is limited by the laborious and expensive effort required for feature engineering across different domains, and suffers from the unseen word/feature problem of natural languages.
This dissertation explores a different approach for information extraction that uses deep learning to automate the representation learning process and generate more effective features. Deep learning is a subfield of machine learning that uses multiple layers of connections to reveal the underlying representations of data. I develop the fundamental deep learning models for information extraction problems and demonstrate their benefits through systematic experiments.
First, I examine word embeddings, a general word representation that is produced by training a deep learning model on a large unlabelled dataset. I introduce methods to use word embeddings to obtain new features that generalize well across domains for relation extraction. This is done for both the feature-based method and the kernel-based method of relation extraction.
Second, I investigate deep learning models for different problems, including entity mention detection, relation extraction and event detection. I develop new mechanisms and network architectures that allow deep learning to model the structures of information extraction problems more effectively. Some extensive experiments are conducted on the domain adaptation and transfer learning settings to highlight the generalization advantage of the deep learning models for information extraction.
Finally, I investigate joint frameworks to simultaneously solve several information extraction problems and benefit from the inter-dependencies among these problems. I design a novel memory-augmented network for deep learning to properly exploit such inter-dependencies. I demonstrate the effectiveness of this network on two important problems of information extraction, i.e., event extraction and entity linking.
-
M.S. Thesis
2018
Classifying the Quality of Movement via Motion Capture and Machine Learning
Saxe, Ryan
Abstract
|
PDF
Title: Classifying the Quality of Movement via Motion Capture and Machine Learning
Candidate: Saxe, Ryan
Advisor(s): Shasha, Dennis
Abstract:
With the recent surge of machine vision technology and available video data, computational methods that utilize this data are becoming increasingly important. This thesis shows that, with the proper application of skeletal tracking, it is possible to discern whether or not a physical task (a squat) is performed well. The skeletal tracking software employed is provided by Optitrack’s motion capture client, Motive:Body. The data generated from Optitrack was used to extract features related to the proper execution of a squat. This thesis uses a variety of machine learning techniques to evaluate the quality of physical performance. Support Vector Machine, Random Forest, and Decision Tree algorithms were tested with ten-fold cross validation, and compared to a baseline of logistic regression given the binary nature of the problem. While logistic regression performed at 66% accuracy, all three other algorithms performed substantially better, with Decision Trees performing best at 80%.
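The ten-fold cross-validation protocol can be sketched as follows (synthetic data and a nearest-centroid stand-in classifier; the thesis's actual Optitrack features and its SVM, forest, and tree models are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the squat feature data: 200 attempts, 10
# features, binary good/bad label (illustrative only).
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def nearest_centroid_fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def nearest_centroid_predict(model, X):
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in (0, 1)])
    return d.argmin(axis=0)

def ten_fold_accuracy(X, y, folds=10):
    """Shuffle, split into 10 folds, train on 9 and test on the held-out
    fold, and average the held-out accuracies."""
    idx = rng.permutation(len(X))
    accs = []
    for chunk in np.array_split(idx, folds):
        train = np.setdiff1d(idx, chunk)
        model = nearest_centroid_fit(X[train], y[train])
        accs.append((nearest_centroid_predict(model, X[chunk]) == y[chunk]).mean())
    return float(np.mean(accs))

acc = ten_fold_accuracy(X, y)
assert acc > 0.5   # better than chance on this separable toy data
```

Each of the four classifiers in the thesis would be scored by the same `ten_fold_accuracy`-style loop, so every model is judged on attempts it never saw during training.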
-
Ph.D. Thesis
2018
Accelerating Approximate Simulation with Deep Learning
Schlachter, Kristofer
Abstract
|
PDF
Title: Accelerating Approximate Simulation with Deep Learning
Candidate: Schlachter, Kristofer
Advisor(s): Perlin, Ken
Abstract:
Once a simulation resorts to an approximate numerical solution one is faced with various tradeoffs in accuracy versus computation time. We propose that another approximate solution can be learned for two chosen simulations, which in our case, are just as useful but can be made faster to compute. The two problems addressed in this thesis are fluid simulation and the simulation of diffuse inter-reflection in computer graphics.
Real-time simulation of fluid and smoke is a long standing problem in computer graphics, where state-of-the-art approaches require large compute resources, making real-time applications often impractical. In this work, we propose a data-driven approach that leverages the approximation power of deep-learning methods with the precision of standard fluid solvers to obtain both fast and highly realistic simulations. The proposed method solves the incompressible Euler equations following the standard operator splitting method, in which a large, often ill-conditioned linear system must be solved. We propose replacing this system by learning a Convolutional Network (ConvNet) from a training set of simulations using a semi-supervised learning method to minimize long-term velocity divergence.
ConvNets are amenable to efficient GPU implementations and, unlike exact iterative solvers, have fixed computational complexity and latency. The proposed hybrid approach restricts the learning task to a linear projection without modeling the well understood advection and body forces. We present real-time 2D and 3D simulations of fluids and smoke; the obtained results are realistic and show good generalization properties to unseen geometry.
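The linear-system step of operator splitting can be sketched in a few lines: solve a discrete Poisson equation for pressure and subtract its gradient so the velocity field becomes approximately divergence-free. The sketch below uses a plain Jacobi solver on a small grid (unit spacing, zero boundary pressure, illustrative sizes) standing in for the exact solver the ConvNet is trained to replace:

```python
import numpy as np

def pressure_project(u, v, iters=400):
    """Make a 2D velocity field (u, v) approximately divergence-free:
    solve the pressure Poisson equation with Jacobi iteration, then
    subtract the pressure gradient. A sketch, not the thesis solver."""
    # Forward-difference divergence on interior cells (unit spacing).
    div = (u[:, 1:] - u[:, :-1])[:-1, :] + (v[1:, :] - v[:-1, :])[:, :-1]
    p = np.zeros_like(div)
    for _ in range(iters):  # Jacobi on the 5-point Laplacian, p = 0 outside
        pp = np.pad(p, 1)
        p = (pp[1:-1, 2:] + pp[1:-1, :-2]
             + pp[2:, 1:-1] + pp[:-2, 1:-1] - div) / 4.0
    pp = np.pad(p, 1)
    u2, v2 = u.copy(), v.copy()
    u2[:-1, :] -= pp[1:-1, 1:] - pp[1:-1, :-1]  # subtract x-gradient of p
    v2[:, :-1] -= pp[1:, 1:-1] - pp[:-1, 1:-1]  # subtract y-gradient of p
    return u2, v2

def max_div(u, v):
    d = (u[:, 1:] - u[:, :-1])[:-1, :] + (v[1:, :] - v[:-1, :])[:, :-1]
    return float(np.abs(d).max())

rng = np.random.default_rng(0)
u, v = rng.normal(size=(32, 32)), rng.normal(size=(32, 32))
u2, v2 = pressure_project(u, v)
assert max_div(u2, v2) < max_div(u, v)  # divergence greatly reduced
```

The Jacobi loop is the expensive, iteration-count-dependent part; replacing it with a ConvNet trained to output `p` directly is what gives the method its fixed computational cost and latency.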
The next simulation that we address is the synthesis of images for training ConvNets. A challenge with training deep learning models is that they commonly require a large corpus of training data, and retrieving sufficient real-world data may be unachievable. A solution to this problem can be found in the use of synthetic or simulated training data. However, for simulated photographs or renderings, there hasn't been a systematic approach to comparing the relative benefits of different techniques in image synthesis.
We compare multiple synthesis techniques to one another as well as the real data that they seek to replicate. We also introduce learned synthesis techniques that either train models better than the most realistic graphical methods used by standard rendering packages or else approach their fidelity using far less computation. We accomplish this by learning shading of geometry as well as denoising the results of low sample Monte Carlo image synthesis. Our major contributions are (i) a dataset that allows comparison of real and synthetic versions of the same scene, (ii) an augmented data representation that boosts the stability of learning, and (iii) three different partially differentiable rendering techniques where lighting, denoising and shading are learned. Finally we are able to generate datasets that can outperform full global illumination rendering and approach the performance of training on real data.
-
TR2018-989
2018
On the Solution of Elliptic Partial Differential Equations on Regions with Corners III: Curved Boundaries
Serkh, Kirill
Abstract
|
PDF
Title: On the Solution of Elliptic Partial Differential Equations on Regions with Corners III: Curved Boundaries
Author(s): Serkh, Kirill
Abstract:
In this report we investigate the solution of boundary value problems for elliptic partial differential equations on domains with corners. Previously, we observed that, in the case of polygonal domains, when the boundary value problems are formulated as boundary integral equations of classical potential theory, the solutions are representable by series of certain elementary functions. Here, we extend this observation to the general case of regions with boundaries consisting of analytic curves meeting at corners. We show that the solutions near the corners have the same leading terms as in the polygonal case, plus a series of corrections involving products of the leading terms with integer powers and powers of logarithms. Furthermore, we show that if the curve in the vicinity of a corner approximates a polygon to order \(k\), then the correction added to the leading terms will vanish like \(O(t^k)\), where \(t\) is the distance from the corner.
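Schematically, the structure described above can be written as follows (a hedged sketch of the form of the expansion, not the precise statement from the report; \(t\) is the distance to the corner and the exponents \(\mu_j\) are those of the polygonal case):

```latex
u(t) \;\sim\; \underbrace{\sum_{j} c_j\, t^{\mu_j}}_{\text{polygonal leading terms}}
\;+\; \underbrace{\sum_{j}\,\sum_{m \ge k}\,\sum_{p \ge 0}
      d_{j,m,p}\; t^{\mu_j + m} \left(\log t\right)^{p}}_{\text{curvature corrections}},
\qquad t \to 0,
```

where the correction terms are products of the leading terms with integer powers and powers of logarithms, and are smaller than the leading terms by a factor of \(O(t^k)\) when the boundary curve approximates a polygon to order \(k\).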
-
TR2018-991
2018
Robotic Room Traversal using Optical Range Finding
Smith, Cole;
Lin, Eric; Shasha, Dennis
Abstract
|
PDF
Title: Robotic Room Traversal using Optical Range Finding
Author(s): Smith, Cole; Lin, Eric; Shasha, Dennis
Abstract:
Consider the goal of visiting every part of a room that is not blocked by obstacles. Doing so efficiently requires both sensors and planning. Our findings suggest a method of inexpensive optical range finding for robotic room traversal. Our room traversal algorithm relies upon the approximate distance from the robot to the nearest obstacle in 360 degrees. We then choose the path with the furthest approximate distance. Since millimeter precision is not required for our problem, we have opted to develop our own laser range finding solution, in lieu of more common, but also expensive, solutions like light detection and ranging (LIDAR). Rather, our solution uses a laser that casts a visible dot on the target and a common camera (an iPhone, for example). Based upon where in the camera frame the laser dot is detected, we may calculate the angle between the target and the laser aperture. Using this angle and the known distance between the camera eye and the laser aperture, we can solve the trigonometric model that provides the distance between the robot and the target.
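Under a pinhole-camera assumption, the trigonometric model can be sketched as follows (the field of view, baseline, and pixel values are illustrative, not measurements from the robot):

```python
import math

def range_from_pixel(pixel_x, image_width, horizontal_fov_deg, baseline_m):
    """Estimate target distance from where the laser dot lands in the
    camera frame. Assumes a pinhole camera with the laser mounted
    parallel to the optical axis at a `baseline_m` offset."""
    # Focal length in pixels from the horizontal field of view.
    fov = math.radians(horizontal_fov_deg)
    focal_px = (image_width / 2) / math.tan(fov / 2)
    # Angle between the optical axis and the ray through the laser dot.
    offset_px = abs(pixel_x - image_width / 2)
    theta = math.atan2(offset_px, focal_px)
    # Parallel-laser triangulation: distance = baseline / tan(theta).
    return baseline_m / math.tan(theta)

# The further the target, the closer the dot sits to the image center.
near = range_from_pixel(pixel_x=400, image_width=640,
                        horizontal_fov_deg=60, baseline_m=0.10)
far = range_from_pixel(pixel_x=330, image_width=640,
                       horizontal_fov_deg=60, baseline_m=0.10)
assert far > near
```

Running this estimate for each heading and steering toward the largest returned distance gives the traversal behavior described above without any LIDAR hardware.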
-
Ph.D. Thesis
2018
Elements of Intelligence: Memory, Communication and Intrinsic Motivation
Sukhbaatar, Sainbayar
Abstract
|
PDF
Title: Elements of Intelligence: Memory, Communication and Intrinsic Motivation
Candidate: Sukhbaatar, Sainbayar
Advisor(s): Fergus, Rob
Abstract:
Building an intelligent agent that can learn and adapt to its environment has always been a challenging task. This is because intelligence consists of many different elements such as recognition, memory, and planning. In recent years, deep learning has shown impressive results in recognition tasks. The aim of this thesis is to extend deep learning techniques to other elements of intelligence.
We start our investigation with memory, an integral part of intelligence that bridges past experience with current decision making. In particular, we focus on episodic memory, which is responsible for storing our past experiences and recalling them. An agent without such memory will struggle at many tasks, such as holding a coherent conversation. We show that a neural network with an external memory is better at such tasks, outperforming traditional recurrent networks with an internal memory.
Another crucial ingredient of intelligence is the capability to communicate with others. In particular, communication is essential for cooperative tasks, enabling agents to better collaborate and improve their division of labor. We investigate whether agents can learn to communicate from scratch without any external supervision. Our finding is that communication through a continuous vector facilitates faster learning by allowing gradients to flow between agents.
Lastly, an intelligent agent must have an intrinsic motivation to learn about its environment on its own without any external supervision or rewards. Our investigation led to one such learning strategy where an agent plays a two-role game with itself. The first role proposes a task, and the second role tries to execute it. Since their goal is to make the other fail, their adversarial interplay pushes them to explore increasingly complex tasks, which results in a better understanding of the environment.
-
Ph.D. Thesis
2018
Rethinking Customer Segmentation and Demand Learning in the Presence of Sparse, Diverse, and Large-scale Data
Venkataraman, Ashwin
Abstract
|
PDF
Title: Rethinking Customer Segmentation and Demand Learning in the Presence of Sparse, Diverse, and Large-scale Data
Candidate: Venkataraman, Ashwin
Advisor(s): Jagabathula, Srikanth; Subramanian, Lakshminarayanan
Abstract:
Firms are now able to collect unprecedented amounts of data. This wealth of data provides new opportunities and capabilities for the firm to better solve classical problems within operational and marketing contexts, such as customer segmentation and demand learning. At the same time, the data imposes new challenges. In addition to its large-scale nature, which creates computational issues, the data comes from a diversity of sources, varying in their respective measurement scales (e.g., clicks, ratings, purchase signals, etc.), and is typically sparse, containing a large fraction of missing observations. The diversity in the data makes it hard to directly compare different observations (clicks vs. purchases, for instance), and the severe sparsity precludes any meaningful imputation of unobserved entries. The data also comes from unreliable sources, which introduce both unintentional and deliberate errors. The identities of such sources are very often unknown, which makes it difficult to determine which sources to trust.
These data challenges require a rethink of traditional techniques for customer segmentation and demand learning. Given their importance and widespread use, this dissertation revisits the classical problems of customer segmentation and demand learning but in the presence of sparse, diverse, and large-scale data. The key contribution of the dissertation is a suite of novel methodologies to deal with the challenges described above.
Part I of the dissertation focuses on the problem of customer segmentation. In Chapter 1, we consider the problem of segmenting (or clustering) a large population of customers based on their preferences, when the preference signals (e.g., clicks, ratings, etc.) come from a multitude of diverse data sources and each customer provides only a few observations. These data characteristics preclude the applicability of traditional marketing techniques as well as standard clustering approaches in machine learning. We propose a model-based embedding technique which takes the customer observations and a probabilistic model class generating the observations as inputs, and outputs an embedding—a low-dimensional vector representation in Euclidean space—for each customer. We then cluster the embeddings to obtain the segments. We show that our segmentation technique can be used to generate highly accurate personalized recommendations in two real-world case studies, including up to 8% improvement over the existing approach on an eBay dataset consisting of millions of customers and items. In addition, it outperforms (both in speed and accuracy) standard techniques in marketing and machine learning.
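The model-based embedding is the chapter's contribution and is not reproduced here; once customers are represented as low-dimensional Euclidean vectors, the segmentation step is standard clustering. A toy sketch with plain k-means on synthetic 2-D "embeddings" (all names and data invented):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means on embedding vectors X (n x d)."""
    # Naive deterministic init: k evenly spaced rows (fine for a demo).
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each embedding to its nearest center ...
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        # ... then move each center to the mean of its members.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy "customer embeddings": two well-separated groups in 2-D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
segments = kmeans(X, 2)
```

The point of the embedding step is precisely to make such a simple geometric procedure meaningful for sparse, heterogeneous observations that cannot be clustered directly.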
In Chapter 2, we turn our attention to the domain of crowdsourced labeling, which provides a low-cost, easy and scalable way to collect labels from the crowd—composed of "workers"—which are then aggregated and used as inputs for training machine learning applications. The main challenge is that workers are often unreliable, and therefore can introduce unintentional or even intentional errors into the labels. The reliabilities of the workers are a priori unknown, so correctly aggregating the labels becomes difficult. We propose algorithms to separate the worker population into two segments, what we call "honest" and "adversarial" workers. Honest workers can provide incorrect labels, but their errors are probabilistic and therefore, can be corrected. Adversarial workers, on the other hand, adopt arbitrary labeling strategies (whether deterministic or probabilistic) and therefore, their labels cannot be trusted. We demonstrate that discarding the labels provided by even a few adversarial workers can significantly improve the accuracy of several existing approaches for aggregating the labels in real-world crowdsourcing datasets.
Part II is devoted to demand learning. In Chapter 3, we consider the problem of learning customer demand for a set of substitutable products. Within operations, the customer demand is typically modeled using a mixture of logit models, which can capture heterogeneity as well as rich substitution patterns in customer preferences. The mixture model is fit to historical sales transactions and inventory data, and the fitted model is used to inform pricing and assortment decisions. We propose a novel nonparametric estimator for the mixture of logit models, providing the ability to make effective use of the large amounts of transaction data that firms have access to. By contrast, most existing techniques impose parametric assumptions—usually driven by tractability considerations—on the mixing distribution, and consequently can suffer from model misspecification issues. We show that our estimator is able to recover good approximations of different ground-truth mixing distributions—despite having no knowledge of their underlying structure—and outperforms the standard expectation-maximization (EM) benchmark in predictive and decision accuracies, while being an order of magnitude faster.
-
Ph.D. Thesis
2017
On Quadtrees, Voronoi Diagrams, and Lattices: Results in Geometric Algorithms
Bennett, Huxley
Abstract
|
PDF
Title: On Quadtrees, Voronoi Diagrams, and Lattices: Results in Geometric Algorithms
Candidate: Bennett, Huxley
Advisor(s): Yap, Chee
Abstract:
We present several results on geometric algorithms, and somewhat more specifically on algorithmic aspects of geometric structures including quadtrees, Voronoi diagrams, and lattices. Our work contains two parts, the first of which is on subdivision algorithms, and the second of which is on lattice algorithms.
Subdivision algorithms amount to recursively splitting an ambient space into smaller pieces until certain conditions hold. Often the underlying space is a square in the plane (or a box in higher dimensions), whose subdivision is represented by a quadtree (or its higher-dimensional analogs). A quadtree is smooth if any two adjacent leaf boxes differ by at most one in depth. We first study the cost of the smooth split operation in quadtrees, showing that it has constant amortized cost in quadtrees of any fixed dimension.
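The smoothness condition is easy to state concretely. A toy 1-D analogue (binary subdivision of an interval rather than a true quadtree; names invented for illustration):

```python
# Leaves of a binary subdivision of [0, 1), each as (start, end, depth).
# Sorting puts adjacent leaves next to each other; the subdivision is
# "smooth" if adjacent leaves differ by at most one in depth.
def is_smooth(leaves):
    leaves = sorted(leaves)
    return all(abs(a[2] - b[2]) <= 1 for a, b in zip(leaves, leaves[1:]))

smooth_leaves = [(0.0, 0.5, 1), (0.5, 0.75, 2), (0.75, 1.0, 2)]
rough_leaves  = [(0.0, 0.5, 1), (0.5, 0.625, 3), (0.625, 0.75, 3),
                 (0.75, 1.0, 2)]
```

Here `smooth_leaves` satisfies the condition, while `rough_leaves` violates it where a depth-1 leaf meets a depth-3 leaf; a smooth split would first refine the coarser neighbor, and the thesis's result is that doing so costs only constant amortized time.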
We then present a subdivision-based algorithm for computing isotopic epsilon-approximations of planar minimization diagrams. Given a family of continuous functions, its minimization diagram partitions the plane into regions on which each function is minimal. Minimization diagrams generalize many natural Voronoi diagrams, and we show how to use our framework to compute an anisotropic Voronoi diagram on polygonal sites. We have implemented a prototype of our algorithm for anisotropic Voronoi diagrams, and we provide experimental results.
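For intuition only (the thesis computes certified isotopic approximations via subdivision, not a raster), a minimization diagram can be sampled on a grid by a pointwise argmin; with squared distances to point sites it reduces to an ordinary Voronoi diagram:

```python
import numpy as np

# Sample each function on a grid; the minimization diagram assigns to
# each grid point the index of the function that is minimal there.
def minimization_diagram(funcs, xs, ys):
    X, Y = np.meshgrid(xs, ys)
    vals = np.stack([f(X, Y) for f in funcs])   # shape (m, ny, nx)
    return vals.argmin(axis=0)                  # index of minimal function

# Two point sites -> their ordinary Voronoi diagram:
sites = [(0.0, 0.0), (1.0, 0.0)]
funcs = [lambda X, Y, s=s: (X - s[0]) ** 2 + (Y - s[1]) ** 2 for s in sites]
grid = minimization_diagram(funcs, np.linspace(-1, 2, 61), np.linspace(-1, 1, 41))
```

Swapping in anisotropic distance functions for `funcs` gives a grid picture of the anisotropic Voronoi diagrams the thesis targets, though without the topological guarantees of the subdivision framework.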
We then turn to studying lattice algorithms. A lattice is a regular ordering of points in Euclidean space, which is represented as the set of all integer combinations of some linearly independent vectors (which we call a basis of the lattice). In our first work on lattices, we introduce and study the Lattice Distortion Problem (LDP). LDP asks how "similar" two lattices are, i.e., what the minimum distortion of a linear bijection between two lattices is. We show how to compute low-distortion mappings with a tradeoff between approximation quality and running time based on a notion of basis reduction introduced by Seysen (Combinatorica 1993). We also show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions).
Finally, we study the problem of finding lattice bases which are optimal with respect to two basis quality measures. Namely, we study the problem of finding bases with minimal orthogonality defect, and with nearly minimal Seysen condition number. We give algorithms which solve both problems while running in time depending only on the rank of the lattice times a polynomial in the input length.
-
Ph.D. Thesis
2017
Improving Event Extraction: Casting a Wider Net
Cao, Kai
Abstract
|
PDF
Title: Improving Event Extraction: Casting a Wider Net
Candidate: Cao, Kai
Advisor(s): Grishman, Ralph
Abstract:
Information extraction is the task of automatically extracting structured information from unstructured and/or semi-structured machine-readable documents. One facet of information extraction is event extraction (EE): identifying instances of selected types of events appearing in natural language text. For each instance, EE should identify the type of the event, the event trigger (the word or phrase which evokes the event), the participants in the event, and (where possible) the time and place of the event.
One EE task was defined and intensively studied as part of the ACE (Automatic Content Extraction) research program. The 2005 ACE EE task involved 8 types and 33 subtypes of events. For instance, given the sentence "She was killed by an automobile yesterday.", an EE system should be able to recognize the word "killed" as a trigger for an event of subtype DIE, and discover "an automobile" and "yesterday" as the Agent and Time arguments. This task is quite challenging, as the same event might appear in the form of various trigger expressions and an expression might represent different types of events in different contexts.
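As a toy illustration of the task itself (real ACE EE systems are statistical; the lexicon below is invented), trigger identification can be caricatured as lexicon lookup; the abstract's point is exactly why such a lookup fails, since the same expression can evoke different event types in different contexts.

```python
# Hypothetical toy trigger lexicon; a real EE system learns this
# mapping and disambiguates it in context.
TRIGGERS = {"killed": "DIE", "died": "DIE", "attacked": "ATTACK"}

def find_triggers(sentence):
    """Return (trigger word, event subtype) pairs found in a sentence."""
    words = sentence.lower().replace(".", "").split()
    return [(w, TRIGGERS[w]) for w in words if w in TRIGGERS]

matches = find_triggers("She was killed by an automobile yesterday.")
```

The lookup finds "killed" as a DIE trigger, but it cannot recover the Agent and Time arguments, nor reject uses of "killed" that do not denote a death, which is where the learned components come in.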
To support the development and evaluation of ACE EE systems, the Linguistic Data Consortium annotated a text corpus (consisting primarily of news articles) with information on the events mentioned. This corpus was widely used to train ACE EE systems. However, the event instances in the ACE corpus are not evenly distributed, and so some frequent expressions involving ACE events do not appear in the training data, adversely affecting performance.
The thesis presents several strategies for improving the performance of EE. We first demonstrate the effectiveness of two types of linguistic analysis -- dependency regularization and Abstract Meaning Representation -- in boosting EE performance. Next we show the benefit of an active learning strategy in which a person is asked to judge a limited number of phrases which may be event triggers. Finally we report the impact of combining our baseline system with event patterns from a system developed for a different EE task (the TABARI program), which provides expert-level patterns generated by other research groups. Because this information is complicated and quite different from the original ACE corpus, integrating it requires more complex processing.
-
TR2017-987
2017
On the Design of Small Coarse Spaces for Domain Decomposition Algorithms
Dohrmann, Clark;
Widlund, Olof
Abstract
|
PDF
Title: On the Design of Small Coarse Spaces for Domain Decomposition Algorithms
Author(s): Dohrmann, Clark; Widlund, Olof
Abstract:
Methods are presented for automatically constructing coarse spaces of low dimension for domain decomposition algorithms. These constructions use equivalence classes of nodes on the interface between the subdomains into which the domain of a given elliptic problem has been subdivided, e.g., by a mesh partitioner such as METIS; these equivalence classes already play a central role in the design, analysis, and programming of many domain decomposition algorithms. The coarse space elements are well defined even for irregular subdomains, are continuous, and are well suited for use in two-level or multi-level preconditioners such as overlapping Schwarz algorithms. An analysis for scalar elliptic and linear elasticity problems reveals that significant reductions in the coarse space dimension can be achieved without sacrificing the favorable condition number estimates previously developed for larger coarse spaces. These estimates depend primarily on the Lipschitz parameters of the subdomains. Numerical examples for problems in three dimensions are presented to illustrate the methods and to confirm the analysis. In some of the experiments, the coefficients have large discontinuities across the interface between the subdomains, and in some, the subdomains are generated by mesh partitioners.
-
Ph.D. Thesis
2017
Random Growth Models
Florescu, Laura
Abstract
|
PDF
Title: Random Growth Models
Candidate: Florescu, Laura
Advisor(s): Spencer, Joel
Abstract:
This work explores variations of randomness in networks, and more specifically, how drastically the dynamics and structure of a network change when a little bit of information is added to "chaos". On one hand, I investigate how much determinism in diffusions de-randomizes the process, and on the other hand, I look at how superposing "planted" information on a random network changes its structure in such a way that the "planted" structure can be recovered.
The first part of the dissertation is concerned with rotor-router walks, a deterministic counterpart to random walk, which is the mathematical model of a path consisting of a succession of random steps. I study and show results on the volume ("the range") of the territory explored by the random rotor-router model, confirming an old prediction of physicists.
The second major part in the dissertation consists of two constrained diffusion problems. The questions in this model are to understand the long-term behavior of the models, as well as how the boundary of the processes evolves in time.
The third part is detecting communities in, or more generally, clustering networks. This is a fundamental problem in mathematics, machine learning, biology and economics, both for its theoretical foundations as well as for its practical implications. This problem can be viewed as "planting" some structure in a random network; for example, in cryptography, a code can be viewed as hiding some integers in a random sequence. For such a model with two communities, I show both information theoretic thresholds when it is impossible to recover the communities based on the density of the edges "planted" between the communities, as well as thresholds for when it is computationally possible to recover the communities.
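The "planted" two-community model behind these thresholds can be sketched as a generator (parameter names are illustrative): each within-community pair is joined with probability p_in and each across-community pair with probability p_out, and recovery is possible only when the two probabilities are sufficiently far apart.

```python
import random

def planted_partition(n, p_in, p_out, seed=0):
    """Two communities of n nodes each; an edge appears with probability
    p_in inside a community and p_out across (the "planted" structure)."""
    rng = random.Random(seed)
    community = {v: v // n for v in range(2 * n)}
    edges = [(u, v) for u in range(2 * n) for v in range(u + 1, 2 * n)
             if rng.random() < (p_in if community[u] == community[v] else p_out)]
    return community, edges

community, edges = planted_partition(50, 0.3, 0.05)
```

With p_in far from p_out, as here, the planted split dominates the edge counts; the interesting regimes studied in the dissertation are those where the densities are close enough that recovery becomes information-theoretically or computationally hard.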
-
Ph.D. Thesis
2017
Zero-knowledge Proofs: Efficient Techniques for Combination Statements and their Applications
Ganesh, Chaya
Abstract
|
PDF
Title: Zero-knowledge Proofs: Efficient Techniques for Combination Statements and their Applications
Candidate: Ganesh, Chaya
Advisor(s): Dodis, Yevgeniy
Abstract:
Zero-knowledge proofs provide a powerful tool which allows a prover to convince a verifier that a statement is true without revealing any further information. It is known that every language in NP has a zero-knowledge proof system, thus opening up several cryptographic applications. While true in theory, designing proof systems that are efficient enough to be used in practice remains challenging. The most common and most efficient systems implemented are based on sigma protocols or on SNARKs (Succinct Non-interactive ARguments of Knowledge). Each approach has its own advantages and shortcomings, and each is suited to certain kinds of statements.
While sigma protocols are efficient for algebraic statements, they are expensive for non-algebraic statements. SNARKs, on the other hand, result in short proofs and efficient verification, and are better suited for proving statements about hash functions. But proving an algebraic statement, for instance, knowledge of discrete logarithm, is expensive as the prover needs to perform public-key operations proportional to the size of the circuit.
Recent work achieves zero-knowledge proofs, based on garbled circuits (GC), that are efficient for statements phrased as Boolean circuits. This approach, again, is expensive for large circuits, in addition to being inherently interactive. Thus, SNARKs and GC-based approaches are better suited for non-algebraic statements, while sigma protocols are efficient for algebraic statements.
But in some applications, one is interested in proving combination statements, that is, statements that have both algebraic and non-algebraic components. The state of the art fails to take advantage of the best of all worlds and has to forgo the efficiency of one approach to obtain the other's. In this work, we ask how to efficiently prove a statement that is a combination of algebraic and non-algebraic statements.
We first show how to combine the GC-based approach with sigma protocols. Then, we study how to combine sigma protocol proofs with SNARKs to obtain non-interactive arguments for combination statements. We show applications of our techniques to anonymous credentials, and privacy-preserving protocols on the blockchain. Finally, we study garbled circuits as a primitive and present an efficient way of hashing garbled circuits. We show applications of our hashing technique, including application to GC-based zero-knowledge.
-
Ph.D. Thesis
2017
Circuit Complexity: New Techniques and Their Limitations
Golovnev, Aleksandr
Abstract
|
PDF
Title: Circuit Complexity: New Techniques and Their Limitations
Candidate: Golovnev, Aleksandr
Advisor(s): Dodis, Yevgeniy; Regev, Oded
Abstract:
We study the problem of proving circuit lower bounds. The strongest known lower bound of 3n-o(n) for an explicit function was proven by Blum in 1984. We prove a lower bound of (3+1/86)n-o(n) for affine dispersers for sublinear dimensions.
We introduce the weighted gate elimination method to give an elementary proof of a 3.11n lower bound for quadratic dispersers (although currently there are no explicit constructions of such functions). We also develop a general framework which allows us to turn lower bound proofs into upper bounds for Circuit SAT algorithms.
Finally, we prove strong limitations of the developed techniques.
-
Ph.D. Thesis
2017
Unsupervised Learning Under Uncertainty
Mathieu, Michael
Abstract
|
PDF
Title: Unsupervised Learning Under Uncertainty
Candidate: Mathieu, Michael
Advisor(s): LeCun, Yann
Abstract:
Deep learning, in particular neural networks, has achieved remarkable success in recent years. However, most of it is based on supervised learning, which relies on ever larger datasets and immense computing power. One step towards general artificial intelligence is to build a model of the world, with enough knowledge to acquire a kind of “common sense”. Representations learned by such a model could be reused in a number of other tasks. This would reduce the requirement for labeled samples and possibly yield a deeper understanding of the problem. The vast quantity of knowledge required to build common sense precludes the use of supervised learning and suggests relying on unsupervised learning instead.
The concept of uncertainty is central to unsupervised learning. The task is usually to learn a complex, multimodal distribution. Density estimation and generative models aim at representing the whole distribution of the data, while predictive learning consists of predicting the state of the world given the context; more often than not, the prediction is not unique. That may be because the model lacks the capacity or the computing power to make a certain prediction, or because the future depends on parameters that are not part of the observation. Finally, the world can be chaotic or truly stochastic. Representing complex, multimodal continuous distributions with deep neural networks is still an open problem.
In this thesis, we first assess the difficulties of representing probabilities in high dimensional spaces, and review the related work in this domain. We then introduce two methods to address the problem of video prediction, first using a novel form of linearizing auto-encoder and latent variables, and secondly using Generative Adversarial Networks (GANs). We show how GANs can be seen as trainable loss functions to represent uncertainty, then how they can be used to disentangle factors of variation. Finally, we explore a new non-probabilistic framework for GANs.
-
M.S. Thesis
2017
Atypical: A type system for live performances
Nunes, Gabriel Barbosa
Abstract
|
PDF
Title: Atypical: A type system for live performances
Candidate: Nunes, Gabriel Barbosa
Advisor(s): Panozzo, Daniele; Perlin, Ken
Abstract:
Chalktalk is a visual language based around real-time interaction with virtual objects in a blackboard-style environment. Its aim is to be a presentation and communication tool, using animation and interactivity to allow easy illustration of complex topics or ideas. Among many of the capabilities of these virtual objects is the ability to send data from one object to another via a visual linking system. In this paper, we describe a way of making the link system more robust by adding type information to these links, and compare and contrast the requirements of a presentation-oriented visual language with a more traditional programming language.
-
Ph.D. Thesis
2017
Fine-scale Structure Design for 3D Printing
Panetta, Francis Julian
Abstract
|
PDF
Title: Fine-scale Structure Design for 3D Printing
Candidate: Panetta, Francis Julian
Advisor(s): Zorin, Denis
Abstract:
Modern additive fabrication technologies can manufacture shapes whose geometric complexities far exceed what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology.
My thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to invent these tools: two-scale design and worst-case analysis.
The two-scale design approach addresses the problem that accurately simulating---let alone optimizing---geometry at the full resolution one can print requires orders of magnitude more computational power than currently available. However, we can use periodic homogenization to decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macro-scale problem (deciding where to place these structures in the larger object). We can then design structures for every possible deformation behavior and store them in a database, so that they can be re-used for many different macro-scale design problems.
Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces most efficiently breaking it. This analysis is needed when the designer has insufficient knowledge or experience to predict what forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori.
-
TR2017-988
2017
Isogeometric BDDC Deluxe Preconditioners for Linear Elasticity
Pavarino, Luca F.;
Scacchi, Simone; Widlund, Olof B.; Zampini, Stefano
Abstract
|
PDF
Title: Isogeometric BDDC Deluxe Preconditioners for Linear Elasticity
Author(s): Pavarino, Luca F.; Scacchi, Simone; Widlund, Olof B.; Zampini, Stefano
Abstract:
Balancing domain decomposition by constraints (BDDC) preconditioners have been shown to provide rapidly convergent preconditioned conjugate gradient methods for solving many of the very ill-conditioned systems of algebraic equations which often arise in finite element approximations of a large variety of problems in continuum mechanics. These algorithms have also been developed successfully for problems arising in isogeometric analysis. In particular, the BDDC deluxe version has proven very successful for problems approximated by non-uniform rational B-splines (NURBS), even those of high order and regularity. One main purpose of this paper is to extend the theory, previously fully developed only for scalar elliptic problems in the plane, to problems of linear elasticity in three dimensions. Numerical experiments supporting the theory are also reported. Some of these experiments highlight the fact that the development of the theory can help to decrease substantially the dimension of the primal space of the BDDC algorithm, which provides the necessary global component of these preconditioners, while maintaining scalability and good convergence rates.
-
Ph.D. Thesis
2017
On the Gaussian Measure Over Lattices
Stephens-Davidowitz, Noah
Abstract
|
PDF
Title: On the Gaussian Measure Over Lattices
Candidate: Stephens-Davidowitz, Noah
Advisor(s): Dodis, Yevgeniy; Regev, Oded
Abstract:
We study the Gaussian mass of a lattice coset \[ \rho_s(\mathcal{L} - \vec{t}) := \sum_{\vec{y} \in \mathcal{L}} \exp(-\pi \|\vec{y} - \vec{t}\|^2/s^2) \; , \] where \(\mathcal{L} \subset \mathbb{R}^n\) is a lattice and \(\vec{t} \in \mathbb{R}^n\) is a vector describing a shift of the lattice. In particular, we use bounds on this Gaussian mass to obtain a partial converse to Minkowski's celebrated theorem bounding the number of lattice points in a ball.
We also consider the discrete Gaussian distribution \(D_{\mathcal{L} - \vec{t}, s}\) induced by the Gaussian measure over \(\mathcal{L} - \vec{t}\), and we use procedures for sampling from this distribution to construct the current fastest known algorithms for the two most important computation problems over lattices, the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP).
Finally, we study \(\rho_s(\mathcal{L} - \vec{t})\) and \(D_{\mathcal{L} - \vec{t}, s}\) as interesting computational and mathematical objects in their own right. In particular, we show that the computational problem of sampling from \(D_{\mathcal{L} - \vec{t}, s}\) is equivalent to CVP in a very strong sense (and that sampling from \(D_{\mathcal{L}, s}\) is no harder than SVP). We also prove a number of bounds on the moments of \(D_{\mathcal{L} - \vec{t}, s}\) and various monotonicity properties of \(\rho_s(\mathcal{L} - \vec{t})\).
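For intuition, the Gaussian mass defined in the abstract can be approximated numerically for the integer lattice \(\mathbb{Z}^n\) by truncating the sum, since the tails decay extremely fast; by Poisson summation, \(\rho_s(\mathbb{Z})\) is very close to \(s\) once \(s\) is moderately large. A brute-force sketch (the `radius` cutoff is an arbitrary choice for this demo):

```python
import math
from itertools import product

def gaussian_mass(t, s, radius=10):
    """Truncated sum for rho_s(Z^n - t): lattice points y range over
    {-radius, ..., radius}^n, which suffices for small n and moderate s."""
    n = len(t)
    return sum(
        math.exp(-math.pi * sum((y[i] - t[i]) ** 2 for i in range(n)) / s ** 2)
        for y in product(range(-radius, radius + 1), repeat=n))

rho = gaussian_mass((0.0,), 3.0)  # close to 3.0, as Poisson summation predicts
```

This brute force is exponential in the dimension n; the algorithms in the thesis are about handling such quantities, and sampling from the induced distribution, for general high-dimensional lattices where direct enumeration is hopeless.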
-
M.S. Thesis
2017
Inducing Cooperation Through Virtual Reality
Zhang, Daniel W.
Abstract
|
PDF
Title: Inducing Cooperation Through Virtual Reality
Candidate: Zhang, Daniel W.
Advisor(s): Perlin, Ken
Abstract:
There has been a recent resurgence in virtual reality (VR) as a new medium for entertainment and communication. Given these exciting developments, I created an experiment to test how people can be influenced by virtual reality interfaces. I hypothesized that people could be induced to cooperate better in a virtual reality version of a task than in an un-augmented version of the task in the regular world. After conducting 16 separate trials, half in VR and half in the regular world, there is no conclusive evidence to confirm or deny this hypothesis. I found evidence to suggest that there can be such an influence, as there were more successes in the VR trials than in the regular trials, but this can potentially be explained by the sample size and the attitudes of participants before starting the experiment. The data suggest that further research in this field can lead to interesting discoveries regarding human behavior in virtual reality environments, and that the Holojam framework invented by the Future Reality Lab at New York University can be very helpful in designing experiments for this research.
-
Ph.D. Thesis
2016
Decision Procedures for Finite Sets with Cardinality, and Local Theories Extensions
Bansal, Kshitij
Abstract
|
PDF
Title: Decision Procedures for Finite Sets with Cardinality, and Local Theories Extensions
Candidate: Bansal, Kshitij
Advisor(s): Barrett, Clark; Wies, Thomas
Abstract:
Many tasks in design, verification, and testing of hardware and computer systems can be reduced to checking satisfiability of logical formulas. Certain fragments of first-order logic that model the semantics of prevalent data types, and hardware and software constructs, such as integers, bit-vectors, and arrays are thus of most interest. The appeal of satisfiability modulo theories (SMT) solvers is that they implement decision procedures for efficiently reasoning about formulas in these fragments. Thus, they can often be used off-the-shelf as automated back-end solvers in verification tools. In this thesis, we expand the scope of SMT solvers by developing decision procedures for new theories of interest in reasoning about hardware and software.
First, we consider the theory of finite sets with cardinality. Sets are a common high-level data structure used in programming; thus, such a theory is useful for modeling program constructs directly. More importantly, sets are a basic construct of mathematics and thus natural to use when mathematically defining the properties of a computer system. We extend a calculus for finite sets to reason about cardinality constraints. The reasoning for cardinality involves tracking how different sets overlap. For an efficient procedure in an SMT solver, we'd like to avoid considering Venn regions directly, which has been the approach in earlier work. We develop a novel technique wherein potentially overlapping regions are considered incrementally. We use a graph to track the interaction of the different regions. Additionally, our technique leverages the procedure for reasoning about the other set operations (besides cardinality) in a modular fashion.
Second, a limitation frequently encountered is that verification problems are often not fully expressible in the theories supported natively by the solvers. Many solvers allow the specification of application-specific theories as quantified axioms, but their handling is incomplete outside of narrow special cases. We show how SMT solvers can be used to obtain complete decision procedures for local theory extensions, an important class of theories that are decidable using finite instantiation of axioms. We present an algorithm that uses E-matching to generate instances incrementally during the search, significantly reducing the number of generated instances compared to eager instantiation strategies.
-
TR2016-986
2016
Detecting Missing and Spurious Edges in Large, Dense Networks Using Parallel Computing
Coolidge, Sam;
Simon, Dan; Shasha, Dennis
Abstract
|
PDF
Title: Detecting Missing and Spurious Edges in Large, Dense Networks Using Parallel Computing
Author(s): Coolidge, Sam; Simon, Dan; Shasha, Dennis
Abstract:
Certain pairs of drugs can cause death from their interaction. Knowledge of such interactions is held in drug interaction networks. The problem is that such networks may miss interactions that should be present and may include interactions that should be absent. Clearly, such information is valuable. Drug interaction networks are not unique in this regard. The same holds for protein-protein interaction networks, ecological networks, and many others. Improving the quality of such networks often requires a ground truth analysis (e.g. more experiments) but Roger Guimerà, Marta Sales-Pardo, and their colleagues have shown in several papers that a structural analysis of networks can lead to predictions of missing and spurious edges that can improve those networks. Our contribution in this paper and the accompanying software is to create a program implementing their algorithmic ideas that is parallelizable and easy to modify for researchers who wish to try out new ideas. Our software can be found at https://github.com/samcoolidge/network.
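A rough illustration of structural edge prediction (a crude common-neighbor score standing in for the stochastic-block-model edge reliabilities of Guimerà and Sales-Pardo; the graph and node names are invented): high-scoring absent pairs suggest missing edges, and low-scoring present edges suggest spurious ones.

```python
import itertools

def edge_scores(nodes, edges):
    """Score every node pair by its number of shared neighbors -- a toy
    stand-in for model-based edge reliability."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return {(u, v): len(adj[u] & adj[v])
            for u, v in itertools.combinations(sorted(nodes), 2)}

scores = edge_scores(["a", "b", "c", "d"],
                     [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")])
# the absent pair ("b", "d") scores as high as a real edge (candidate
# missing edge), while the present edge ("c", "d") scores 0 (candidate
# spurious edge)
```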
-
Ph.D. Thesis
2016
Analyzing Source Code Across Static Conditionals
Gazzillo, Paul
Abstract
|
PDF
Title: Analyzing Source Code Across Static Conditionals
Candidate: Gazzillo, Paul
Advisor(s): Wies, Thomas
Abstract:
We need better tools for C, such as source browsers, bug finders, and automated refactorings. The problem is that large C systems such as Linux are software product lines, containing thousands of configuration variables controlling every aspect of the software from architecture features to file systems and drivers. The challenge of such configurability is how software tools can accurately analyze all configurations of the source without the exponential explosion of trying them all separately. To this end, we focus on two key subproblems, parsing and the build system. The contributions of this thesis are the following: (1) a configuration-preserving preprocessor and parser called SuperC that preserves configurations in its output syntax tree; (2) a configuration-preserving Makefile evaluator called Kmax that collects Linux's compilation units and their configurations; and (3) a framework for configuration-aware analyses of source code using these tools.
C tools need to process two languages: C itself and the preprocessor. The latter improves expressivity through file includes, macros, and static conditionals. But it operates only on tokens, making it hard to even parse both languages. SuperC is a complete, performant solution to parsing all of C. First, a configuration-preserving preprocessor resolves includes and macros yet leaves static conditionals intact, thus preserving a program's variability. To ensure completeness, we analyze all interactions between preprocessor features and identify techniques for correctly handling them. Second, a configuration-preserving parser generates a well-formed AST with static choice nodes for conditionals. It forks new subparsers when encountering static conditionals and merges them again after the conditionals. To ensure performance, we present a simple algorithm for table-driven Fork-Merge LR parsing and four novel optimizations. We demonstrate SuperC's effectiveness on the x86 Linux kernel.
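The fork-and-merge idea can be caricatured in a few lines (a toy token-stream parser producing static-choice nodes; nothing like SuperC's real Fork-Merge LR machinery or its optimizations):

```python
def parse(tokens, i=0):
    """Toy fork/merge over static conditionals: ordinary tokens accumulate
    into the current branch, while an ("#if", cond) token forks two
    subparses that are merged back as a static-choice node."""
    ast = []
    while i < len(tokens):
        t = tokens[i]
        if isinstance(t, tuple) and t[0] == "#if":
            then, i = parse(tokens, i + 1)           # fork: then-branch
            alt = []
            if tokens[i] == "#else":
                alt, i = parse(tokens, i + 1)        # fork: else-branch
            ast.append(("choice", t[1], then, alt))  # merge as choice node
            i += 1                                   # consume "#endif"
        elif t in ("#else", "#endif"):
            return ast, i
        else:
            ast.append(t)
            i += 1
    return ast, i

tokens = ["int", ("#if", "CONFIG_X"), "a", "#else", "b", "#endif", ";"]
tree, _ = parse(tokens)
```

The resulting tree keeps both branches of the conditional, so a later analysis can reason about all configurations at once instead of preprocessing one configuration at a time.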
Large-scale C codebases like Linux are software product families, with complex build systems that tailor the software with myriad features. Such variability management is a challenge for tools, because they need awareness of variability to process all software product lines within the family. With over 14,000 features, processing all of Linux's product lines is infeasible by brute force, and current solutions employ incomplete heuristics. But having the complete set of compilation units with precise variability information is key to static tools such as bug-finders, which could miss critical bugs, and refactoring tools, since behavior-preservation requires a complete view of the software project. Kmax is a new tool for the Linux build system that extracts all compilation units with precise variability information. It processes build system files with a variability-aware \texttt{make} evaluator that stores variables in a conditional symbol table and hoists conditionals around complete statements, while tracking variability information as presence conditions. Kmax is evaluated empirically for correctness and completeness on the Linux kernel. Kmax is compared to previous work for correctness and running time, demonstrating that a complete solution's added complexity incurs only minor latency compared to the incomplete heuristic solutions.
SuperC's configuration-preserving parsing of compilation units and Kmax's project-wide capabilities are in a unique position to process source code across all configurations. Bug-finding is one area where such capability is useful. Bugs may appear in untested combinations of configurations, and testing each configuration one-at-a-time is infeasible. For example, a compilation unit that defines a global function called by other compilation units may not be linked into the final program due to configuration variable selection. Such a bug can be found with Kmax and SuperC's cross-configuration capability. Cilantro is a framework for creating variability-aware bug-checkers. Kmax is used to determine the complete set of compilation units and the combinations of features that activate them, while SuperC's parsing framework is extended with semantic actions in order to implement the checkers. A checker for linker errors across all compilation units in the Linux kernel demonstrates each part of the Cilantro framework and is evaluated on the Linux source code.
-
Ph.D. Thesis
2016
Semi-Supervised Learning for Electronic Phenotyping in Support of Precision Medicine
Halpern, Yonatan
Abstract
|
PDF
Title: Semi-Supervised Learning for Electronic Phenotyping in Support of Precision Medicine
Candidate: Halpern, Yonatan
Advisor(s): Sontag, David
Abstract:
Medical informatics plays an important role in precision medicine, delivering the right information to the right person, at the right time. With the introduction and widespread adoption of electronic medical records, in the United States and worldwide, there is now a tremendous amount of health data available for analysis. Electronic record phenotyping refers to the task of determining, from an electronic medical record entry, a concise descriptor of the patient, comprising their medical history, current problems, presentation, etc. In inferring such a phenotype descriptor from the record, a computer, in a sense, "understands" the relevant parts of the record. These phenotypes can then be used in downstream applications such as cohort selection for retrospective studies, real-time clinical decision support, contextual displays, intelligent search, and precise alerting mechanisms.
To handle the incomplete data present in medical records, we use the formal framework of probabilistic graphical models with latent or unobserved variables. The first part of the thesis presents two different structural conditions under which learning with latent variables is computationally tractable. The first is the "anchored" condition, where every latent variable has at least one child that is not shared by any other parent. The second is the "singly-coupled" condition, where every latent variable is connected to at least three children that satisfy conditional independence (possibly after a transformation of the data). Variables that satisfy these conditions can be specified by an expert without requiring that the entire structure or its parameters be specified, allowing for effective use of human expertise and making room for statistical learning to do some of the heavy lifting in model learning. For both the anchored and singly-coupled conditions, practical algorithms are presented.
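The anchored condition itself is mechanical to check once the parent-child structure is written down; a small sketch (the latent variables and symptom names here are invented for illustration, not taken from the thesis):

```python
from collections import Counter

def anchors(children):
    """children: latent variable -> set of observed children. A child is an
    anchor if no other latent variable also points to it; the anchored
    condition requires every latent variable to have at least one anchor."""
    parent_count = Counter(c for kids in children.values() for c in kids)
    return {z: {c for c in kids if parent_count[c] == 1}
            for z, kids in children.items()}

model = {"flu":  {"fever", "cough", "flu_test"},
         "cold": {"cough", "runny_nose"}}
anchor_sets = anchors(model)
# the shared child "cough" anchors neither latent variable
```

An expert only needs to assert which children are anchors; the rest of the structure and parameters can then be learned statistically, which is the division of labor the abstract describes.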
The second part of the thesis describes real-life applications using the anchored condition for electronic phenotyping. A human-in-the-loop learning system and a functioning emergency informatics system for real-time extraction of important clinical variables are described and evaluated.
The algorithms and discussion presented here were developed for the purpose of improving healthcare, but are much more widely applicable, dealing with the very basic questions of identifiability and learning models with latent variables - a problem that lies at the very heart of the natural and social sciences.
-
TR2016-984
2016
Finding Prospects for Shopping Centers: a machine learning approach
Kogan, Jonathan;
Jain, Rishabh; Jean, Joe; Lowrance, Roy; Shasha, Dennis
Abstract
|
PDF
Title: Finding Prospects for Shopping Centers: a machine learning approach
Author(s): Kogan, Jonathan; Jain, Rishabh; Jean, Joe; Lowrance, Roy; Shasha, Dennis
Abstract:
We have developed an algorithm that predicts which store types are the best prospects to fill vacancies in shopping centers given the combinations of stores already there. The model is able to make predictions with accuracies up to 81.62% for the first prediction, 90.05% for the first two predictions, 93.34% for the first three predictions, 95.52% for the first four predictions, and 96.48% for the first five predictions. The p-values with respect to a naïve strategy of choosing the store types that are simply most frequent are all below 0.0001%. This paper explains how the system was built and some user tests, not all of which were positive. The system can be found at http://linserv2.cims.nyu.edu:54321. The code for the project can be found at https://github.com/jgk99/Store-Prospector.
-
Ph.D. Thesis
2016
Improving Knowledge Base Population with Information Extraction
Li, Xiang
Abstract
|
PDF
Title: Improving Knowledge Base Population with Information Extraction
Candidate: Li, Xiang
Advisor(s): Grishman, Ralph
Abstract:
Knowledge Bases (KBs) are data resources that encode world knowledge in machine-readable formats. Knowledge Base Population (KBP) aims at understanding this knowledge and extending KBs with more semantic information, which is a fundamental problem in Artificial Intelligence. It can benefit a wide range of tasks, such as semantic search and question answering. Information Extraction (IE), the task of discovering important types of facts (entities, relations and events) in unstructured text, is necessary and crucial for successfully populating knowledge bases. This dissertation focuses on four essential aspects of knowledge base population by leveraging IE techniques: extracting facts from unstructured data, validating the extracted information, accelerating and enhancing systems with less annotation effort, and utilizing knowledge bases to improve real-world applications.
First, we investigate the Slot Filling task, which is a key component for knowledge base population. Slot filling aims to collect information from a large collection of news, web, or other sources of documents to determine a set of predefined attributes ("slots") for given person and organization entities. We introduce a statistical language understanding approach to automatically construct personal (user-centric) knowledge bases from conversational dialogs.
Second, we consider how to probabilistically estimate the correctness of the extracted slot values. Despite the significant progress of KBP research and systems in recent years, slot filling approaches are still far from completely reliable. Using the NIST KBP Slot Filling task as a case study, we propose a confidence estimation model based on the Maximum Entropy framework, and demonstrate the effectiveness of this model in both precision and the capability to improve the slot filling aggregation through a weighted voting strategy.
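The weighted-voting aggregation step, taken in isolation, might look like the following sketch (the confidence numbers are illustrative; in the thesis they come from the Maximum Entropy confidence model, not from this toy):

```python
from collections import defaultdict

def aggregate(candidates):
    """candidates: (slot_value, confidence) pairs pooled from several
    slot-filling runs; the value with the highest summed confidence wins."""
    totals = defaultdict(float)
    for value, conf in candidates:
        totals[value] += conf
    return max(totals, key=totals.get)

winner = aggregate([("IBM", 0.9), ("I.B.M.", 0.4), ("IBM", 0.3)])
```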
Third, we study rich annotation guided learning to fill the gap between an expert annotator and a feature engineer. We develop an algorithm to enrich features with the guidance of all levels of rich annotations from human annotators. We also evaluate the comparative efficacy, generality and scalability of this framework by conducting case studies on three distinct applications in various domains, including facilitating KBP slot filling systems. Empirical studies demonstrate that with little additional annotation time, we can significantly improve the performance for all tasks.
Finally, we explore utilizing knowledge bases in a real-world application - personalized content recommendation. Traditional systems infer user interests from surface-level features derived from online activity logs and user demographic profiles, rather than deeply understanding the context semantics. We conduct a systematic study to show the effectiveness of incorporating deep semantic knowledge encoded in the entities on modeling user interests, by utilizing the abundance of entity information from knowledge bases.
-
Ph.D. Thesis
2016
Improving SAT Solvers by Exploiting Empirical Characteristics of CDCL
Oh, Chanseok
Abstract
|
PDF
Title: Improving SAT Solvers by Exploiting Empirical Characteristics of CDCL
Candidate: Oh, Chanseok
Advisor(s): Wies, Thomas
Abstract:
The Boolean Satisfiability Problem (SAT) is a canonical decision problem originally shown to be NP-complete in Cook’s seminal work on the theory of computational complexity. The SAT problem is one of several computational tasks identified by researchers as core problems in computer science. The existence of an efficient decision procedure for SAT would imply P = NP. However, numerous algorithms and techniques for solving the SAT problem have been proposed in various forms in practical settings. Highly efficient solvers are now actively being used, either directly or as a core engine of a larger system, to solve real-world problems that arise from many application domains. These state-of-the-art solvers use the Davis-Putnam-Logemann-Loveland (DPLL) algorithm extended with Conflict-Driven Clause Learning (CDCL). Due to the practical importance of SAT, building a fast SAT solver can have a huge impact on current and prospective applications. The ultimate contribution of this thesis is improving the state of the art of CDCL by understanding and exploiting the empirical characteristics of how CDCL works on real-world problems. The first part of the thesis shows empirically that most of the unsatisfiable real-world problems solvable by CDCL have a refutation proof with near-constant width for a great portion of the proof. Based on this observation, the thesis provides an unconventional perspective that CDCL solvers can solve real-world problems very efficiently and often more efficiently just by maintaining a small set of certain classes of learned clauses. The next part of the thesis focuses on understanding the inherently different natures of satisfiable and unsatisfiable problems and their implications on the empirical workings of CDCL. We examine the varying degree of roles and effects of crucial elements of CDCL based on the satisfiability status of a problem.
Ultimately, we propose effective techniques to exploit the new insights about the different natures of proving satisfiability and unsatisfiability to improve the state of the art of CDCL. In the last part of the thesis, we present a reference solver that incorporates all the techniques described in the thesis. The design of the presented solver emphasizes minimality in implementation while guaranteeing state-of-the-art performance. Several versions of the reference solver have demonstrated top-notch performance, earning several medals in the annual SAT competitive events. The minimal spirit of the reference solver shows that a simple CDCL framework alone can still be made competitive with state-of-the-art solvers that implement sophisticated techniques outside the CDCL framework.
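One concrete reading of "maintaining a small set of certain classes of learned clauses" is clause-database reduction keyed on a quality measure such as LBD (literal block distance), sketched here in toy form (the cutoff, retention fraction, and clause names are illustrative, not the solver's actual heuristics):

```python
def reduce_db(learned, glue_cut=3, keep_frac=0.5):
    """learned: list of (clause, lbd) pairs. Low-LBD ('glue') clauses are
    kept unconditionally; the rest are ranked by LBD and the worst
    fraction discarded, keeping the clause database small."""
    core = [c for c in learned if c[1] <= glue_cut]
    rest = sorted((c for c in learned if c[1] > glue_cut),
                  key=lambda c: c[1])
    return core + rest[: int(len(rest) * keep_frac)]

db = [("c1", 2), ("c2", 5), ("c3", 8), ("c4", 3), ("c5", 6), ("c6", 9)]
kept = reduce_db(db)
```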
-
Ph.D. Thesis
2016
Graph-based Approaches to Resolve Entity Ambiguity
Pershina, Maria
Abstract
|
PDF
Title: Graph-based Approaches to Resolve Entity Ambiguity
Candidate: Pershina, Maria
Advisor(s): Grishman, Ralph
Abstract:
Information Extraction is the task of automatically extracting structured information from unstructured or semi-structured machine-readable documents. One of the challenges of Information Extraction is to resolve ambiguity between entities either in a knowledge base or in text documents. There are many variations of this problem and it is known under different names, such as coreference resolution, entity disambiguation, entity linking, entity matching, etc. For example, the task of coreference resolution decides whether two expressions refer to the same entity; entity disambiguation determines how to map an entity mention to an appropriate entity in a knowledge base (KB); the main focus of entity linking is to infer that two entity mentions in one or more documents refer to the same real world entity even if they do not appear in a KB; entity matching (also record deduplication, entity resolution, reference reconciliation) is to merge records from databases if they refer to the same object.
Resolving ambiguity and finding proper matches between entities is an important step for many downstream applications, such as data integration, question answering, relation extraction, etc. The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains, posing a scalability challenge for Information Extraction systems. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and to answer complex queries. However the efficient alignment of large-scale knowledge bases still poses a considerable challenge.
Various aspects and different settings to resolve ambiguity between entities are studied in this dissertation. A new scalable domain-independent graph-based approach utilizing Personalized Page Rank is developed for entity matching across large-scale knowledge bases and evaluated on datasets of 110 million and 203 million entities. A new model for entity disambiguation between a document and a knowledge base utilizing a document graph and effectively filtering out noise is proposed. A new technique based on a paraphrase detection model is proposed to recognize name variations for an entity in a document. A new approach integrating a graph-based entity disambiguation model and this technique is presented for an entity linking task and is evaluated on a dataset for the Text Analysis Conference Entity Discovery and Linking 2014 task.
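A minimal power-iteration sketch of Personalized PageRank, the score underlying the matching approach (a toy four-node graph with invented entity names, not the dissertation's knowledge-base data):

```python
def personalized_pagerank(adj, seed, alpha=0.15, iters=50):
    """Power iteration for PageRank restarted at `seed`: at each step a
    fraction alpha of the mass teleports back to the seed, so scores are
    personalized to the seed's neighborhood."""
    nodes = list(adj)
    pr = {v: 1.0 if v == seed else 0.0 for v in nodes}
    for _ in range(iters):
        nxt = {v: (alpha if v == seed else 0.0) for v in nodes}
        for u in nodes:
            if adj[u]:
                share = (1 - alpha) * pr[u] / len(adj[u])
                for v in adj[u]:
                    nxt[v] += share
        pr = nxt
    return pr

adj = {"jaguar_car": ["auto"], "auto": ["jaguar_car"],
       "jaguar_cat": ["animal"], "animal": ["jaguar_cat"]}
pr = personalized_pagerank(adj, "jaguar_car")
# mass stays in the seed's component, separating the two senses of "jaguar"
```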
-
TR2016-985
2016
On the Solution of Elliptic Partial Differential Equations on Regions with Corners II: Detailed Analysis
Serkh, Kirill
Abstract
|
PDF
Title: On the Solution of Elliptic Partial Differential Equations on Regions with Corners II: Detailed Analysis
Author(s): Serkh, Kirill
Abstract:
In this report we investigate the solution of boundary value problems on polygonal domains for elliptic partial differential equations.
-
TR2016-982
2016
Alphacodes: Usable, Secure Transactions with Untrusted Providers using Human Computable Puzzles
Sharma, Ashlesh;
Chandrasekaran, Varun; Amjad, Fareeha; Shasha, Dennis; Subramanian, Lakshminarayanan
Abstract
|
PDF
Title: Alphacodes: Usable, Secure Transactions with Untrusted Providers using Human Computable Puzzles
Author(s): Sharma, Ashlesh; Chandrasekaran, Varun; Amjad, Fareeha; Shasha, Dennis; Subramanian, Lakshminarayanan
Abstract:
Many banking and commerce payment systems, especially in developing regions, continue to require users to share private or sensitive information in clear-text with untrusted providers, exposing them to different forms of man-in-the-middle attacks. In this paper, we introduce Alphacodes, a new paradigm that provides a usable security solution that enables users to perform secure transactions with untrusted parties using the notion of visual puzzles. Alphacodes are designed as verification codes for short message transactions and provide easy authentication of critical portions of a transaction. We describe how Alphacodes can be applied in different use cases and also show two simple applications that we have built using the Alphacodes framework. We show security vulnerabilities in existing systems and show how our protocol overcomes them. We also demonstrate the ease of use of Alphacodes with minimal training using two simple Mechanical Turk studies. Using another simple real-world user study involving 10 users who speak Kannada (a local Indian language), we show that the Alphacodes concept can be easily extended to other languages beyond English.
-
Ph.D. Thesis
2016
Partition Memory Models for Program Analysis
Wang, Wei
Abstract
|
PDF
Title: Partition Memory Models for Program Analysis
Candidate: Wang, Wei
Advisor(s): Barrett, Clark; Wies, Thomas
Abstract:
Scalability is a key challenge in static program analyses based on solvers for Satisfiability Modulo Theories (SMT). For imperative languages like C, the approach taken for modeling memory can play a significant role in scalability. The main theme of this thesis is using partitioned memory models to divide up memory based on the alias information derived from a points-to analysis.
First, a general analysis framework based on memory partitioning is presented. It incorporates a points-to analysis as a preprocessing step to determine a conservative approximation of which areas of memory may alias or overlap and splits the memory into distinct arrays for each of these areas.
Then we propose a new cell-based field-sensitive points-to analysis, which is an extension of Steensgaard’s unification-based algorithm. A cell is a unit of access with scalar or record type. Arrays and dynamic memory allocations are viewed as a collection of cells. We show how our points-to analysis yields more precise alias information for programs with complex heap data structures.
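Steensgaard-style unification, the starting point that the thesis refines with cells, can be sketched with a union-find structure (toy variable names; field- and cell-sensitivity are omitted here):

```python
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def alias_classes(assignments):
    """Each assignment p = q unifies the classes of p and q; variables in
    the same class may alias, so each class becomes one memory partition
    (one array) instead of a single flat array of bytes."""
    uf = UnionFind()
    for lhs, rhs in assignments:
        uf.union(lhs, rhs)
    groups = {}
    for v in list(uf.parent):
        groups.setdefault(uf.find(v), set()).add(v)
    return list(groups.values())

classes = alias_classes([("p", "q"), ("q", "r"), ("a", "b")])
```

The partitioned memory model then gives the SMT solver one array per alias class, which is what drives the scalability gains reported above.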
Our work is implemented in Cascade, a static analysis framework for C programs. It replaces the former flat memory model that models the memory as a single array of bytes. We show that the partitioned memory models achieve better scalability within Cascade, and the cell-based memory model, in particular, improves the performance significantly, making Cascade a state-of-the-art C analyzer.
-
TR2016-981
2016
Scaling Multicore Databases via Constrained Parallel Execution
Wang, Zhaoguo;
Mu, Shuai; Cui, Yang; Yi, Han; Chen, Haibo; Li, Jinyang
Abstract
|
PDF
Title: Scaling Multicore Databases via Constrained Parallel Execution
Author(s): Wang, Zhaoguo; Mu, Shuai; Cui, Yang; Yi, Han; Chen, Haibo; Li, Jinyang
Abstract:
Multicore in-memory databases often rely on traditional concurrency control schemes such as two-phase-locking (2PL) or optimistic concurrency control (OCC). Unfortunately, when the workload exhibits a non-trivial amount of contention, both 2PL and OCC sacrifice much parallel execution opportunity. In this paper, we describe a new concurrency control scheme, interleaving constrained concurrency control (IC3), which provides serializability while allowing for parallel execution of certain conflicting transactions. IC3 combines the static analysis of the transaction workload with runtime techniques that track and enforce dependencies among concurrent transactions. The use of static analysis simplifies IC3’s runtime design, allowing it to scale to many cores. Evaluations on a 64-core machine using the TPC-C benchmark show that IC3 outperforms traditional concurrency control schemes under contention. It achieves the throughput of 434K transactions/sec on the TPC-C benchmark configured with only one warehouse. It also scales better than several recent concurrency control schemes that also target contended workloads.
-
M.S. Thesis
2016
A New Strongly Polynomial Algorithm for Computing Fisher Market Equilibria with Spending Constraint Utilities
Wang, Zi
Abstract
|
PDF
Title: A New Strongly Polynomial Algorithm for Computing Fisher Market Equilibria with Spending Constraint Utilities
Candidate: Wang, Zi
Advisor(s): Cole, Richard
Abstract:
This thesis develops and analyzes an algorithm to compute equilibrium prices for a Fisher market in which the buyer utilities are given by spending constraint functions, utility functions originally defined by Devanur and Vazirani.
Vazirani gave a weakly polynomial time algorithm to compute the equilibrium prices. More recently Vegh gave a strongly polynomial algorithm. Here we provide another strongly polynomial algorithm, which arguably is conceptually simpler, although the running time is not always better.
-
Ph.D. Thesis
2016
Learning Algorithms from Data
Zaremba, Wojciech
Abstract
|
PDF
Title: Learning Algorithms from Data
Candidate: Zaremba, Wojciech
Advisor(s): Fergus, Rob; LeCun, Yann
Abstract:
Statistical machine learning is concerned with learning models that describe observations. We train our models from data on tasks like machine translation or object recognition because we cannot explicitly write down programs to solve such problems. A statistical model is only useful when it generalizes to unseen data. Solomonoff has proved that one should choose the model that agrees with the observed data, while preferring the model that can be compressed the most, because such a choice guarantees the best possible generalization. The size of the best possible compression of the model is called the Kolmogorov complexity of the model. We define an algorithm as a function with small Kolmogorov complexity.
This Ph.D. thesis outlines the problem of learning algorithms from data and shows several partial solutions to it. Our models are mainly neural networks, as they have proven to be successful in various domains like object recognition, language modeling, speech recognition and others. First, we examine empirical trainability limits for classical neural networks. Then, we extend them with interfaces that provide a way to read memory, access the input, and postpone predictions. The model learns how to use them with reinforcement learning techniques like Reinforce and Q-learning. Next, we examine whether contemporary algorithms such as the convolution layer can be automatically rediscovered. We show that it is indeed possible to learn convolution as a special case within a broader class of models. Finally, we investigate whether it is possible to directly enumerate short programs and find a solution to a given problem. This follows the original line of thought behind Solomonoff induction. Our approach is to learn a prior over programs such that we can explore them efficiently.
-
Ph.D. Thesis
2016
Distributed Stochastic Optimization for Deep Learning
Zhang, Sixin
Abstract
|
PDF
Title: Distributed Stochastic Optimization for Deep Learning
Candidate: Zhang, Sixin
Advisor(s): LeCun, Yann
Abstract:
We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin scheme. An asynchronous and momentum variant of the EASGD method is applied to train deep convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Our approach accelerates the training and furthermore achieves better test accuracy. It also requires a much smaller amount of communication than other common baseline approaches such as the DOWNPOUR method.
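The synchronous EASGD round reduces to two coupled updates: each worker takes a gradient step plus an elastic pull toward a shared center variable, and the center moves toward the workers. A scalar sketch (toy numbers, not a deep network; the coupling constant `alpha` plays the role of the elastic term's step size in the paper's notation):

```python
def easgd_step(workers, center, grads, eta=0.1, alpha=0.05):
    """One synchronous EASGD round on scalar parameters: workers take a
    gradient step plus an elastic pull toward the center, and the center
    moves toward the average position of the workers."""
    new_workers = [x - eta * g - alpha * (x - center)
                   for x, g in zip(workers, grads)]
    new_center = center + alpha * sum(x - center for x in workers)
    return new_workers, new_center

# with zero gradients, the elastic term alone pulls workers and center together
ws, c = easgd_step([1.0, -1.0], 0.0, [0.0, 0.0])
```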
We then investigate the limit in speedup of the initial and the asymptotic phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find that the spread of the input data distribution has a big impact on their initial convergence rate and stability region. We also find a surprising connection between the momentum SGD and the EASGD method with a negative moving average rate. A non-convex case is also studied to understand when EASGD can get trapped by a saddle point.
Finally, we scale up the EASGD method by using a tree structured network topology. We show empirically its advantage and challenge. We also establish a connection between the EASGD and the DOWNPOUR method with the classical Jacobi and the Gauss-Seidel method, thus unifying a class of distributed stochastic optimization methods.
-
Ph.D. Thesis
2016
Pushing the Limits of Additive Fabrication Technologies
Zhou, Qingnan (James)
Abstract
|
PDF
Title: Pushing the Limits of Additive Fabrication Technologies
Candidate: Zhou, Qingnan (James)
Advisor(s): Zorin, Denis
Abstract:
A rough symmetry can be observed in the stock price of 3D Systems (NYSE:DDD), the leading and largest 3D printer manufacturer, from its IPO on June 3, 2011 to the beginning of 2016. The price skyrocketed nearly 600% from 2011 to the end of 2013, and took a free fall back to its original value by 2016. Coincidentally, this is also the period during which I got my hands dirty and investigated some of the toughest challenges as well as exciting new possibilities associated with different types of 3D printing technologies. In this thesis, I document my attempts from three different angles to push the limits of 3D printing: printability, microstructure design, and robust geometry processing with mesh arrangements.
Printability check has long been the bottleneck that prevents 3D printing from scaling up. Oftentimes, designers of 3D models lack the expertise or tools to ensure 3D printability. 3D printing service providers typically rely on human inspection to filter out unprintable designs. This process is manual and error-prone. As designs become ever more complex, manual printability check becomes increasingly difficult. To tackle this problem, my colleagues and I proposed an algorithm to automatically determine structurally weak regions and the worst-case usage scenario to break a given model. We validate the algorithm by physically breaking a number of real 3D printed designs.
A key distinctive feature of 3D printing technologies is that the cost and time of fabrication is uncorrelated with geometric complexity. This opens up many exciting new possibilities. In particular, by pushing geometric complexity to the extreme, 3D printing has the potential of fabricating soft, deformable shapes with microscopic structures using a single raw material. In our recent SIGGRAPH publication, my colleagues and I have not only demonstrated fabricating microscopic frame structures is possible but also proposed an entire pipeline for designing spatially varying microstructures to satisfy target material properties or deformation goals.
With the boost of 3D printing technologies, 3D models have become more abundant and easily accessible than ever before. These models are sometimes known as "wild" models because they differ significantly in complexity and quality from traditional models in graphics research. This poses a serious challenge in robustly analyzing 3D designs. Many state-of-the-art geometry processing algorithms and libraries are ill-prepared for dealing with "wild" models that are non-manifold, self-intersecting, locally degenerate, and/or contain multiple and possibly nested components. In our most recent SIGGRAPH submission, we proposed a systematic recipe based on mesh arrangements for conducting a family of exact constructive solid geometry operations. We exhaustively tested our algorithm on 10,000 "wild" models crawled from Thingiverse, a popular online shape repository. Both the code and the dataset are freely available to the public.
-
TR2015-977
2015
Adaptive Selection of Primal Constraints for Isogeometric BDDC Deluxe Preconditioners
Beirão da Veiga, L.;
Pavarino, L. F.; Scacchi, S.; Widlund, O. B.; Zampini, S.
Abstract
|
PDF
Title: Adaptive Selection of Primal Constraints for Isogeometric BDDC Deluxe Preconditioners
Author(s): Beirão da Veiga, L.; Pavarino, L. F.; Scacchi, S.; Widlund, O. B.; Zampini, S.
Abstract:
Isogeometric analysis has been introduced as an alternative to finite element methods in order to simplify the integration of CAD software and the discretization of variational problems of continuum mechanics. In contrast with the finite element case, the basis functions of isogeometric analysis are often not nodal. As a consequence, there are fat interfaces which can easily lead to an increase in the number of interface variables after a decomposition of the parameter space into subdomains. Building on earlier work on the deluxe version of the BDDC family of domain decomposition algorithms, several adaptive algorithms are here developed for scalar elliptic problems in an effort to decrease the dimension of the global, coarse component of these preconditioners. Numerical experiments provide evidence that this work can be successful, yielding scalable and quasi-optimal adaptive BDDC algorithms for isogeometric discretizations.
-
TR2015-979
2015
An Adaptive Choice of Primal Constraints for BDDC Domain Decomposition Algorithms
Calvo, Juan G.;
Widlund, Olof B.
Abstract
|
PDF
Title: An Adaptive Choice of Primal Constraints for BDDC Domain Decomposition Algorithms
Author(s): Calvo, Juan G.; Widlund, Olof B.
Abstract:
An adaptive choice for primal spaces, based on parallel sums, is developed for BDDC deluxe methods and elliptic problems in three dimensions. The primal space, which forms the global, coarse part of the domain decomposition algorithm and which is always required for any competitive algorithm, is defined in terms of generalized eigenvalue problems related to subdomain edges and faces; selected eigenvectors associated with the smallest eigenvalues are used to enhance the primal spaces. This selection can be made automatic by using tolerance parameters specified for the subdomain faces and edges. Numerical results verify the theory and provide a comparison with commonly used primal spaces. They include results for cubic subdomains as well as subdomains obtained by a mesh partitioner. Different coefficient distributions are also considered, including constant coefficients, highly random values, and channel distributions.
-
TR2015-974
2015
Domain Decomposition Methods for Problems in H(curl)
Calvo, Juan Gabriel
Abstract
|
PDF
Title: Domain Decomposition Methods for Problems in H(curl)
Author(s): Calvo, Juan Gabriel
Abstract:
Two domain decomposition methods for solving vector field problems posed in H(curl) and discretized with Nédélec finite elements are considered. These finite elements are conforming in H(curl).
A two-level overlapping Schwarz algorithm in two dimensions is analyzed, where the subdomains are only assumed to be uniform in the sense of Peter Jones. The coarse space is based on energy minimization and its dimension equals the number of interior subdomain edges. Local direct solvers are based on the overlapping subdomains. The bound for the condition number depends only on a few geometric parameters of the decomposition. This bound is independent of jumps in the coefficients across the interface between the subdomains for most of the different cases considered.
A bound is also obtained for the condition number of a balancing domain decomposition by constraints (BDDC) algorithm in two dimensions, with Jones subdomains. For the primal variable space, a continuity constraint for the tangential average over each interior subdomain edge is imposed. For the averaging operator, a new technique named deluxe scaling is used. The optimal bound is independent of jumps in the coefficients across the interface between the subdomains.
Furthermore, a new coarse function for problems in three dimensions is introduced, with only one degree of freedom per subdomain edge. In all cases, it is established that the algorithms are scalable. Numerical results that verify the theory are provided, including some with subdomains with fractal edges and others obtained by a mesh partitioner.
-
Ph.D. Thesis
2015
Big Data Analytics for Development: Events, Knowledge Graphs and Predictive Models
Chakraborty, Sunandan
Abstract
|
PDF
Title: Big Data Analytics for Development: Events, Knowledge Graphs and Predictive Models
Candidate: Chakraborty, Sunandan
Advisor(s): Subramanian, Lakshminarayanan; Nyarko, Yaw
Abstract:
Volatility in critical socio-economic indices can have a significant negative impact on global development. This thesis presents a suite of novel big data analytics algorithms that operate on unstructured Web data streams to automatically infer events, knowledge graphs and predictive models to understand, characterize and predict the volatility of socioeconomic indices.
This thesis makes four important research contributions. First, given a large volume of diverse unstructured news streams, we present new models for capturing events and learning spatio-temporal characteristics of events from news streams. We specifically explore two types of event models in this thesis: one centered around the concept of event triggers and a probabilistic meta-event model that explicitly delineates named entities from text streams to learn a generic class of meta-events. The second contribution focuses on learning several different types of knowledge graphs from news streams and events: a) Spatio-temporal article graphs capture intrinsic relationships between different news articles; b) Event graphs characterize relationships between events and given a news query, provide a succinct summary of a timeline of events relating to a query; c) Event-phenomenon graphs that provide a condensed representation of classes of events that relate to a given phenomena at a given location and time; d) Causality testing on word-word graphs which can capture strong spatio-temporal relationships between word occurrences in news streams; e) Concept graphs that capture relationships between different word concepts that occur in a given text stream.
The third contribution focuses on connecting the different knowledge graph representations and structured time series data corresponding to a socio-economic index to automatically learn event-driven predictive models for the given socio-economic index to predict future volatility. We propose several types of predictive models centered around our two event models: event triggers and probabilistic meta-events. The final contribution focuses on a broad spectrum of inference case studies for different types of socio-economic indices including food prices, stock prices, disease outbreaks and interest rates. Across all these indices, we show that event-driven predictive models provide significant improvements in prediction accuracy over state-of-the-art techniques.
-
Ph.D. Thesis
2015
SMT-Based and Disjunctive Relational Abstract Domains for Static Analysis
Chen, Junjie
Abstract
|
PDF
Title: SMT-Based and Disjunctive Relational Abstract Domains for Static Analysis
Candidate: Chen, Junjie
Advisor(s): Patrick Cousot
Abstract:
Abstract Interpretation is a theory of sound approximation of program semantics. In recent decades, it has been widely and successfully applied to the static analysis of computer programs. In this thesis, we will work on abstract domains, one of the key concepts in abstract interpretation, which aim at automatically collecting information about the set of all possible values of the program variables. We will focus, in particular, on two aspects: the combination with theorem provers and the refinement of existing abstract domains.
Satisfiability modulo theories (SMT) solvers are popular theorem provers, which proved to be very powerful tools for checking the satisfiability of first-order logical formulas with respect to some background theories. In the first part of this thesis, we introduce two abstract domains whose elements are logical formulas involving finite conjunctions of affine equalities and finite conjunctions of linear inequalities. These two abstract domains rely on SMT solvers for the computation of transformations and other logical operations.
In the second part of this thesis, we present an abstract domain functor whose elements are binary decision trees. It is parameterized by decision nodes which are a set of boolean tests appearing in the programs and by a numerical or symbolic abstract domain whose elements are the leaves. This new binary decision tree abstract domain functor provides a flexible way of adjusting the cost/precision ratio in path-dependent static analysis.
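The decision tree functor can be illustrated with a toy sketch. The names and the single-variable interval leaf domain below are our own illustrative choices, not the thesis's implementation: decision nodes carry boolean tests from the program, leaves carry numerical abstract values, and the join merges leaves pointwise so path-sensitive information survives.

```python
# Toy binary decision tree abstract domain (illustrative only). Decision
# nodes are source-level boolean tests; leaves are intervals [lo, hi] for
# a single variable x. Joining two trees with the same decision nodes
# merges only the leaves (interval hull), keeping path sensitivity.

class Leaf:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

class Node:
    def __init__(self, test, yes, no):
        self.test, self.yes, self.no = test, yes, no

def join(a, b):
    if isinstance(a, Leaf):
        return Leaf(min(a.lo, b.lo), max(a.hi, b.hi))  # interval hull
    return Node(a.test, join(a.yes, b.yes), join(a.no, b.no))

# Two analyses of `if (flag) ... else ...`, kept separate per branch:
t1 = Node("flag", Leaf(0, 1), Leaf(10, 20))
t2 = Node("flag", Leaf(0, 2), Leaf(10, 15))
merged = join(t1, t2)
print(merged.yes.lo, merged.yes.hi, merged.no.lo, merged.no.hi)  # 0 2 10 20
```

A non-path-sensitive interval join of the same states would produce the single, much coarser interval [0, 20]; adjusting which tests become decision nodes is what tunes the cost/precision ratio.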
-
Ph.D. Thesis
2015
Iris: Mitigating Phase Noise in Millimeter Wave OFDM Systems
Dhananjay, Aditya
Abstract
|
PDF
Title: Iris: Mitigating Phase Noise in Millimeter Wave OFDM Systems
Candidate: Dhananjay, Aditya
Advisor(s): Li, Jinyang
Abstract:
Next-generation wireless networks are widely expected to operate over millimeter-wave (mmW) frequencies of over 28GHz. These bands mitigate the acute spectrum shortage in the conventional microwave bands of less than 6GHz. The shorter wavelengths in these bands also allow for building dense antenna arrays on a single chip, thereby enabling various MIMO configurations and highly directional links that can increase the spatial reuse of spectrum.
While attempting to build a practical over-the-air (OTA) link over mmW, we realized that the traditional baseband processing techniques used in the microwave bands simply could not cope with the exacerbated frequency offsets (or phase noise) observed in the RF oscillators at these bands. While the frequency offsets are large, the real difficulty arose from the fact that they varied significantly over very short time-scales. Traditional feedback loop techniques still left significant residual offsets, which in turn led to inter-carrier interference (ICI). The result was very high symbol error rates (SER).
This thesis presents Iris, a baseband processing block that enables clean mmW links, even in the presence of previously fatal amounts of phase noise. Over real mmW hardware, Iris reduces the SER by one to two orders of magnitude, as compared to competing techniques.
-
Ph.D. Thesis
2015
Predicting Images using Convolutional Networks: Visual Scene Understanding with Pixel Maps
Eigen, David
Abstract
|
PDF
Title: Predicting Images using Convolutional Networks: Visual Scene Understanding with Pixel Maps
Candidate: Eigen, David
Advisor(s): Fergus, Rob
Abstract:
In the greater part of this thesis, we develop a set of convolutional networks that infer predictions at each pixel of an input image. This is a common problem that arises in many computer vision applications: for example, predicting a semantic label at each pixel describes not only the image content, but also fine-grained locations and segmentations; at the same time, finding depth or surface normals provides 3D geometric relations between points. The second part of this thesis also investigates convolutional models in the contexts of classification and unsupervised learning.
To address our main objective, we develop a versatile Multi-Scale Convolutional Network that can be applied to diverse vision problems using simple adaptations, and apply it to predict depth at each pixel, surface normals and semantic labels. Our model uses a series of convolutional network stacks applied at progressively finer scales. The first uses the entire image field of view to predict a spatially coarse set of feature maps based on global relations; subsequent scales correct and refine the output, yielding a high resolution prediction. We look exclusively at depth prediction first, then generalize our method to multiple tasks. Our system achieves state-of-the-art results on all tasks we investigate, and can match many image details without the need for superpixelation.
Leading up to our multi-scale network, we also design a purely local convolutional network to remove dirt and raindrops present on a window surface, which learns to identify and inpaint compact corruptions. We also investigate a weighted nearest-neighbors labeling system applied to superpixels, in which we learn weights for each example and use local context to find rare class instances.
In addition, we investigate the relative importance of sizing parameters using a recursive convolutional network, finding that network depth is most critical. We also develop a Convolutional LISTA Autoencoder, which learns features similar to stacked sparse coding at a fraction of the cost, combine it with a local entropy objective, and describe a convolutional adaptation of ZCA whitening.
-
TR2015-976
2015
Kmax: Analyzing the Linux Build System
Gazzillo, Paul
Abstract
|
PDF
Title: Kmax: Analyzing the Linux Build System
Author(s): Gazzillo, Paul
Abstract:
Large-scale C software like Linux needs software engineering tools. But such codebases are software product families, with complex build systems that tailor the software with myriad features. This variability management is a challenge for tools, because they need awareness of variability to process all software product lines within the family. With over 14,000 features, processing all of Linux's product lines is infeasible by brute force, and current solutions employ incomplete heuristics. But having the complete set of compilation units with precise variability information is key to static tools such as bug-finders, which could otherwise miss critical bugs, and to refactoring tools, since behavior preservation requires a complete view of the software project. Kmax is a new tool for the Linux build system that extracts all compilation units with precise variability information. It processes build system files with a variability-aware make evaluator that stores variables in a conditional symbol table and hoists conditionals around complete statements, while tracking variability information as presence conditions. Kmax is evaluated empirically for correctness and completeness on the Linux kernel. Kmax is compared to previous work for correctness and running time, demonstrating that a complete solution's added complexity incurs only minor latency compared to the incomplete heuristic solutions.
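The conditional symbol table idea can be sketched as follows. This is a hypothetical simplification with invented names, not Kmax's actual code: each make variable maps to a list of (presence condition, value) entries, so an assignment made under a configuration conditional only applies when that condition holds.

```python
# Toy conditional symbol table (a simplification of the idea behind Kmax,
# not its implementation). Real presence conditions are boolean formulas
# over configuration variables; here they are plain strings.

class ConditionalSymbolTable:
    def __init__(self):
        self.table = {}  # variable name -> list of (condition, value)

    def assign(self, name, value, condition="True"):
        # Record that `name` has `value` whenever `condition` holds.
        self.table.setdefault(name, []).append((condition, value))

    def lookup(self, name):
        # Return every possible value with the condition under which it holds.
        return self.table.get(name, [])

st = ConditionalSymbolTable()
st.assign("obj-y", "fork.o exec.o")                            # unconditional
st.assign("obj-y", "fork.o exec.o kmod.o", "CONFIG_MODULES")   # conditional
print(st.lookup("obj-y"))
```

Looking up `obj-y` yields both values, each guarded by its presence condition, which is what lets a variability-aware evaluator enumerate compilation units per configuration.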
-
Ph.D. Thesis
2015
Unsupervised Feature Learning in Computer Vision
Goroshin, Ross
Abstract
|
PDF
Title: Unsupervised Feature Learning in Computer Vision
Candidate: Goroshin, Ross
Advisor(s): LeCun, Yann
Abstract:
Much of computer vision has been devoted to the question of representation through feature extraction. Ideal features transform raw pixel intensity values to a representation in which common problems such as object identification, tracking, and segmentation are easier to solve. Recently, deep feature hierarchies have proven to be immensely successful at solving many problems in computer vision. In the supervised setting, these hierarchies are trained to solve specific problems by minimizing an objective function of the data and problem specific label information. Recent findings suggest that despite being trained on a specific task, the learned features can be transferred across multiple visual tasks. These findings suggest that there exists a generically useful feature representation for natural visual data.
This work aims to uncover the principles that lead to these generic feature representations in the unsupervised setting, which does not require problem specific label information. We begin by reviewing relevant prior work, particularly the literature on autoencoder networks and energy based learning. We introduce a new regularizer for autoencoders that plays an analogous role to the partition function in probabilistic graphical models. Next we explore the role of specialized encoder architectures for sparse inference. The remainder of the thesis explores visual feature learning from video. We establish a connection between slow-feature learning and metric learning, and experimentally demonstrate that semantically coherent metrics can be learned from natural videos. Finally, we posit that useful features linearize natural image transformations in video. To this end, we introduce a new architecture and loss for training deep feature hierarchies that linearize the transformations observed in unlabeled natural video sequences by learning to predict future frames in the presence of uncertainty.
-
Ph.D. Thesis
2015
Efficient and Trustworthy Theory Solver for Bit-vectors in Satisfiability Modulo Theories
Hadarean, Liana
Abstract
|
PDF
Title: Efficient and Trustworthy Theory Solver for Bit-vectors in Satisfiability Modulo Theories
Candidate: Hadarean, Liana
Advisor(s): Barrett, Clark
Abstract:
As software and hardware systems grow in complexity, automated techniques for ensuring their correctness are becoming increasingly important. Many modern formal verification tools rely on back-end satisfiability modulo theories (SMT) solvers to discharge complex verification goals. These goals are usually formalized in one or more fixed first-order logic theories, such as the theory of fixed-width bit-vectors. The theory of bit-vectors offers a natural way of encoding the precise semantics of typical machine operations on binary data. The predominant approach to deciding the bit-vector theory is via eager reduction to propositional logic. While this often works well in practice, it does not scale well as the bit-width and number of operations increase. The first part of this thesis seeks to address this limitation by exploring efficient techniques for solving bit-vector constraints that leverage word-level structure. We propose two complementary approaches: an eager approach that takes full advantage of the solving power of off-the-shelf propositional logic solvers, and a lazy approach that combines on-the-fly algebraic reasoning with efficient propositional logic solvers. In the second part of the thesis, we propose a proof system for encoding automatically checkable refutation proofs in the theory of bit-vectors. These proofs can be automatically generated by the SMT solver and act as a certificate for the correctness of the result.
-
TR2015-975
2015
A Crop Recommendation Tool for Organic Farmers
Hsu, Jasmine;
Shasha, Dennis
Abstract
|
PDF
Title: A Crop Recommendation Tool for Organic Farmers
Author(s): Hsu, Jasmine; Shasha, Dennis
Abstract:
We describe the data sources and machine learning algorithms that go into the current version of http://www.whatcanifarm.com , a website to help prospective organic farmers determine what to grow given the climate characterized by their zip code.
-
M.S. Thesis
2015
Responsive Visualization of Points in Space: Sampling, Clustering, Partitioning
Jain, Akshay
Abstract
|
PDF
-
Ph.D. Thesis
2015
Predicting the Market Value of Single-Family Residences
Lowrance, Roy
Abstract
|
PDF
Title: Predicting the Market Value of Single-Family Residences
Candidate: Lowrance, Roy
Advisor(s): LeCun, Yann; Shasha, Dennis
Abstract:
This work develops the best linear model of residential real estate prices for 2003 through 2009 in Los Angeles County. It differs from other studies comparing models for predicting house prices by covering a larger geographic area, more houses, and a longer time period than most, including the periods both before and after the real estate price boom in the United States.
In addition, it open-sources all of the software. We test designs for linear models to determine the best form for the model as well as the training period, features, and regularizer that produce the lowest errors. We compare the best of our linear models to random forests and point to directions for further research.
-
Ph.D. Thesis
2015
Building Fast, CPU-Efficient Distributed Systems on Ultra-Low Latency, RDMA-Capable Networks
Mitchell, Christopher
Abstract
|
PDF
Title: Building Fast, CPU-Efficient Distributed Systems on Ultra-Low Latency, RDMA-Capable Networks
Candidate: Mitchell, Christopher
Advisor(s): Li, Jinyang
Abstract:
Modern datacenters utilize traditional Ethernet interconnects to connect hundreds or thousands of machines. Although inexpensive and ubiquitous, Ethernet imposes design constraints on datacenter-scale distributed storage systems that use traditional client-server architectures. Recent technological trends indicate that future datacenters will embrace interconnects with ultra-low latency, high bandwidth, and the ability to offload work from servers to clients. Future datacenter-scale distributed storage systems will need to be designed specifically to exploit these features. This thesis explores what these features mean for large-scale in-memory storage systems, and derives two key insights for building RDMA-aware distributed systems.
First, relaxing locality between data and computation is now practical: data can be copied from servers to clients for computation. Second, selectively relaxing data-computation locality makes it possible to optimally balance load between server and client CPUs to maintain low application latency. This thesis presents two in-memory distributed storage systems built around these two insights, Pilaf and Cell, that demonstrate effective use of ultra-low-latency, RDMA-capable interconnects. Through Pilaf and Cell, this thesis demonstrates that by combining RDMA and message passing to selectively relax locality, systems can achieve ultra-low latency and optimal load balancing with modest CPU resources.
-
TR2015-978
2015
BDDC Algorithm with Deluxe Scaling and Adaptive Selection of Primal Constraints for Raviart-Thomas Vector Fields
Oh, Duk-Soon;
Widlund, Olof B.; Zampini, Stefano; Dohrmann, Clark R.
Abstract
|
PDF
Title: BDDC Algorithm with Deluxe Scaling and Adaptive Selection of Primal Constraints for Raviart-Thomas Vector Fields
Author(s): Oh, Duk-Soon; Widlund, Olof B.; Zampini, Stefano; Dohrmann, Clark R.
Abstract:
A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a new type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced.
Under the assumption that the subdomains are all built from elements of a coarse triangulation of the given domain, and that the material parameters are constant in each subdomain, a bound is obtained for the condition number of the preconditioned linear system which is independent of the values and the jumps of the coefficients across the interface and is polylogarithmic in the number of degrees of freedom of the individual subdomains. Numerical experiments for two- and three-dimensional problems, using the PETSc library and a large parallel computer, are also presented; they support the theory and show the effectiveness of the algorithms even for problems not covered by the theory. Also included are experiments with a variety of finite element approximations.
-
TR2015-972
2015
Practical SMT-Based Type Error Localization
Pavlinovic, Zvonimir;
Wies, T.
Abstract
|
PDF
Title: Practical SMT-Based Type Error Localization
Author(s): Pavlinovic, Zvonimir; Wies, T.
Abstract:
Compilers for statically typed functional programming languages are notorious for generating confusing type error messages. When the compiler detects a type error, it typically reports the program location where the type checking failed as the source of the error. Since other error sources are not even considered, the actual root cause is often missed. A more adequate approach is to consider all possible error sources and report the most useful one subject to some usefulness criterion. In our previous work, we showed that this approach can be formulated as an optimization problem related to satisfiability modulo theories (SMT). This formulation cleanly separates the heuristic nature of usefulness criteria from the underlying search problem. Unfortunately, algorithms that search for an optimal error source cannot directly use principal types which are crucial for dealing with the exponential-time complexity of the decision problem of polymorphic type checking. In this paper, we present a new algorithm that efficiently finds an optimal error source in a given ill-typed program. Our algorithm uses an improved SMT encoding to cope with the high complexity of polymorphic typing by iteratively expanding the typing constraints from which principal types are derived. The algorithm preserves the clean separation between the heuristics and the actual search. We have implemented our algorithm for OCaml. In our experimental evaluation, we found that the algorithm reduces the running times for optimal type error localization from minutes to seconds and scales better than previous localization algorithms.
-
Ph.D. Thesis
2015
Instance Segmentation of RGBD Scenes
Silberman, Nathan
Abstract
|
PDF
Title: Instance Segmentation of RGBD Scenes
Candidate: Silberman, Nathan
Advisor(s): Fergus, Rob
Abstract:
The vast majority of literature in scene parsing can be described as semantic pixel labeling or semantic segmentation: predicting the semantic class of the object represented by each pixel in the scene. Our familiar perception of the world, however, provides a far richer representation. Firstly, rather than just being able to predict the semantic class of a location in a scene, humans are able to reason about object instances. Discriminating between a region that might represent a single object versus ten objects is a crucial and basic faculty. Secondly, rather than reasoning about objects as merely occupying the space visible from a single vantage point, we are able to quickly and easily reason about an object's true extent in 3D. Thirdly, rather than viewing a scene as a collection of objects independently existing in space, humans exhibit a representation of scenes that is highly grounded through an intuitive model of physics. Such models allow us to reason about how objects relate physically: via physical support relationships.
Instance segmentation is the task of segmenting a scene into regions which correspond to individual object instances. We argue that this task is not only closer to our own perception of the world than semantic segmentation, but also directly allows for subsequent reasoning about a scene's constituent elements. We explore various strategies for instance segmentation in indoor RGBD scenes.
Firstly, we explore tree-based instance segmentation algorithms. The utility of trees for semantic segmentation has been thoroughly demonstrated and we adapt them to instance segmentation and analyze both greedy and global approaches to inference.
Next, we investigate exemplar-based instance segmentation algorithms, in which a set of representative exemplars are chosen from a large pool of regions and pixels are assigned to exemplars. Inference can either be performed in two stages, exemplar selection followed by pixel-to-exemplar assignment, or in a single joint reasoning stage. We consider the advantages and disadvantages of each approach.
We introduce the task of support-relation prediction, in which we predict which objects are physically supporting other objects. We propose an algorithm and a new set of features for performing discriminative support prediction, demonstrate the effectiveness of our method, and compare training mechanisms.
Finally, we introduce an algorithm for inferring scene and object extent. We demonstrate how reasoning about 3D extent can be done by extending known 2D methods and highlight the strengths and limitations of this approach.
-
Ph.D. Thesis
2015
Localization of Humans in Images Using Convolutional Networks
Tompson, Jonathan
Abstract
|
PDF
Title: Localization of Humans in Images Using Convolutional Networks
Candidate: Tompson, Jonathan
Advisor(s): Bregler, Christopher
Abstract:
Tracking of humans in images is a long-standing problem in computer vision research for which, despite significant research effort, an adequate solution has not yet emerged. This is largely because human body localization is complicated and difficult: potential solutions must find the locations of body joints in images with invariance to shape, lighting, and texture variation, and must do so in the presence of occlusion and incomplete data. Despite these significant challenges, this work presents a framework for human body pose localization that not only offers a significant improvement over existing traditional architectures, but also has sufficient localization performance and computational efficiency for use in real-world applications.
At its core, this framework makes use of Convolutional Networks to infer the locations of body joints efficiently and accurately. We describe solutions to two applications: 1) hand tracking from a depth image source and 2) human body tracking from an RGB image source. For both applications we show that Convolutional Networks significantly out-perform the existing state of the art.
We propose a new hybrid architecture that consists of a deep Convolutional Network and a Probabilistic Graphical Model which can exploit structural domain constraints such as geometric relationships between body joint locations to improve tracking performance. We then explore the use of both color and motion features to improve tracking performance. Finally we introduce a novel architecture which includes an efficient ‘position refinement’ model that is trained to estimate the joint offset location within a small region of the image. This refinement model allows our network to improve spatial localization accuracy even with large amounts of spatial pooling.
-
TR2015-973
2015
Acronym Disambiguation
Turtel, Benjamin D.;
Shasha, Dennis
Abstract
|
PDF
Title: Acronym Disambiguation
Author(s): Turtel, Benjamin D.; Shasha, Dennis
Abstract:
Acronym disambiguation is the process of determining the correct expansion of an acronym in a given context. We describe a novel approach for expanding acronyms, by identifying acronym / expansion pairs in a large training corpus of text from Wikipedia and using these as a training dataset to expand acronyms based on word frequencies. On instances in which the correct acronym expansion has at least one instance in our training set (therefore making correct expansion possible), and in which the correct expansion is not the only expansion of an acronym seen in our training set (therefore making the expansion decision a non-trivial decision), we achieve an average accuracy of 88.6%. On a second set of experiments using user-submitted documents, we achieve an average accuracy of 81%.
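The frequency-based idea can be sketched roughly as follows. The scoring function, data, and names here are invented for illustration and are not taken from the report: each candidate expansion keeps counts of words seen near it in training, and a new occurrence is expanded to the candidate whose context profile best overlaps the surrounding words.

```python
# Illustrative sketch of context-frequency acronym expansion (hypothetical
# scoring, not the report's exact method). Training maps each acronym to
# candidate expansions with counts of nearby words observed in a corpus.

from collections import Counter

training = {
    "ML": {
        "machine learning": Counter({"model": 5, "training": 4, "data": 6}),
        "maximum likelihood": Counter({"estimator": 5, "probability": 4}),
    }
}

def expand(acronym, context_words):
    candidates = training.get(acronym, {})
    def score(expansion):
        freqs = candidates[expansion]
        return sum(freqs[w] for w in context_words)  # Counter gives 0 for unseen words
    return max(candidates, key=score) if candidates else None

print(expand("ML", ["the", "model", "needs", "more", "data"]))
# -> "machine learning": its context counts overlap the query words
```

The decision is non-trivial exactly when, as in the experiments described above, more than one expansion of the acronym appears in the training set.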
-
Ph.D. Thesis
2015
Joint Training of a Neural Network and a Structured Model for Computer Vision
Wan, Li
Abstract
|
PDF
Title: Joint Training of a Neural Network and a Structured Model for Computer Vision
Candidate: Wan, Li
Advisor(s): Fergus, Rob
Abstract:
Identifying objects and telling where they are in real-world images is one of the most important problems in Artificial Intelligence. The problem is challenging due to occluded objects, varying object viewpoints, and object deformations. This makes the vision problem extremely difficult, and it cannot be solved efficiently without learning.
This thesis explores hybrid systems that combine a neural network as a trainable feature extractor and structured models that capture high level information such as object parts. The resulting models combine the strengths of the two approaches: a deep neural network which provides a powerful non-linear feature transformation and a high level structured model which integrates domain-specific knowledge. We develop discriminative training algorithms to jointly optimize these entire models end-to-end.
First, we propose a unified model which combines a deep neural network with a latent topic model for image classification. The hybrid model is shown to outperform models based on a neural network or a topic model alone. Next, we investigate techniques for training a neural network system, introducing an effective way of regularizing the network called DropConnect. DropConnect allows us to train large models while avoiding over-fitting, yielding state-of-the-art results on a variety of standard benchmarks for image classification. Third, we work on object detection for the PASCAL challenge, improving the deformable parts model and proposing a new non-maximal suppression algorithm; this system was the joint winner of the 2011 challenge. Finally, we develop a new hybrid model which integrates a deep network, a deformable parts model and non-maximal suppression. Joint training of our hybrid model shows a clear advantage over training each component individually, and achieves competitive results on standard benchmarks.
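DropConnect generalizes Dropout by randomly zeroing individual weights rather than activations during training. A minimal NumPy sketch of one fully connected layer follows; the inference-time scaling by p shown here is the common mean-field shortcut, whereas the thesis itself uses a Gaussian moment-matching approximation at inference.

```python
import numpy as np

def dropconnect_forward(x, W, b, p=0.5, rng=None, train=True):
    """Fully connected layer with DropConnect.

    During training each *weight* (not each activation, as in Dropout) is
    independently kept with probability p and zeroed otherwise.
    """
    if train:
        rng = rng or np.random.default_rng(0)
        mask = rng.random(W.shape) < p   # keep each weight with probability p
        return x @ (W * mask) + b
    return x @ (p * W) + b               # mean-field approximation at inference
```

Because a fresh mask is sampled per example (or per mini-batch), the effective model is a large ensemble of sparsely connected networks, which is the source of the regularization effect.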
-
Ph.D. Thesis
2015
Partition Memory Models in Program Analysis
Wang, Wei
Abstract
|
PDF
Title: Partition Memory Models in Program Analysis
Candidate: Wang, Wei
Advisor(s): Barrett, Clark
Abstract:
Scalability is a key challenge in static program analyses based on solvers for Satisfiability Modulo Theories (SMT). For imperative languages like C, the approach taken for modeling memory can play a significant role in scalability. The main theme of this thesis is using partitioned memory models to divide up memory based on the alias information derived from a points-to analysis.
First, a general analysis framework based on memory partitioning is presented. It incorporates a points-to analysis as a preprocessing step to determine a conservative approximation of which areas of memory may alias or overlap and splits the memory into distinct arrays for each of these areas.
Then we propose a new cell-based field-sensitive points-to analysis, which is an extension of Steensgaard's unification-based algorithms. A cell is a unit of access with scalar or record type. Arrays and dynamic memory allocations are viewed as collections of cells. We show how our points-to analysis yields more precise alias information for programs with complex heap data structures.
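A unification-based points-to analysis can be sketched with a union-find structure over cells; equivalence classes of cells that may alias then correspond to the distinct memory arrays of the partitioned model. This toy version handles only `p = &c` and `p = q` statements and omits the thesis's field sensitivity.

```python
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:               # path halving
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def steensgaard(assignments, addr_of):
    """Steensgaard-style unification over cells.

    addr_of:     (p, cell) facts, i.e. p = &cell
    assignments: (p, q) facts, i.e. p = q, which unify the cells p and q
                 point to.
    """
    uf, pts = UnionFind(), {}
    for p, cell in addr_of:
        if p in pts:
            uf.union(pts[p], cell)   # p points to several cells: merge them
        else:
            pts[p] = cell
    for p, q in assignments:
        if p in pts and q in pts:
            uf.union(pts[p], pts[q])
        elif q in pts:
            pts[p] = pts[q]
    return uf, pts
```

Two pointers may alias exactly when their points-to cells land in the same equivalence class, so each final class becomes one partition of memory.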
Our work is implemented in Cascade, a static analysis framework for C programs. It replaces the former flat memory model, which models memory as a single array of bytes. We show that the partitioned memory models achieve better scalability within Cascade, and the cell-based memory model, in particular, improves the performance significantly, making Cascade a state-of-the-art C analyzer.
-
Ph.D. Thesis
2014
On the Human Form: Efficient acquisition, modeling and manipulation of the human body
Braga, Otavio
Abstract
|
PDF
Title: On the Human Form: Efficient acquisition, modeling and manipulation of the human body
Candidate: Braga, Otavio
Advisor(s): Geiger, Davi
Abstract:
This thesis concerns the acquisition, modeling and manipulation of the human form.
First, we acquire body models. We introduce an efficient bootstrapped algorithm that we employed to register over 2,000 high-resolution body scans of male and female adult subjects. Our algorithm outputs not only the traditional vertex correspondences, but also directly produces a high-quality model which can be immediately deformed. We then employ the result to fit noisy depth maps coming from now commercially available 3D sensors such as Microsoft's Kinect and PrimeSense's Carmine.
We then switch focus to the topic of body manipulation. We first revisit the more traditional way of specifying bodies from a set of measurements, such as those coming from clothing sizing charts, showing how the statistics of the population learned during registration can aid us in accurately defining the body shape. We then introduce a new manipulation metaphor, where we navigate through the space of body shapes and poses by directly dragging the body mesh surface.
We conclude by describing a new real-time system for image-based body manipulation called BodyJam, that lets you change your outfit with a finger snap. BodyJam is inspired by a technique invented by the surrealists a century ago: "Exquisite Corpse", a method by which a collection of images (of body parts) is collectively assembled. BodyJam does it on a video display that mirrors the pose in real-time of a real-person standing in front of the camera/display mirror, and allows the user to change clothes and other appearance attributes. Using Microsoft's Kinect, poses are matched to a video database of different torsos and legs, and "pages" showing different clothes are turned by hand gestures.
-
TR2014-969
2014
Overlapping Schwarz Algorithms for Almost Incompressible Linear Elasticity
Cai, Mingchao;
Pavarino, Luca F.; Widlund, Olof B.
Abstract
|
PDF
Title: Overlapping Schwarz Algorithms for Almost Incompressible Linear Elasticity
Author(s): Cai, Mingchao; Pavarino, Luca F.; Widlund, Olof B.
Abstract:
Low-order finite element discretizations of the linear elasticity system suffer increasingly from locking effects and ill-conditioning, as the material approaches the incompressible limit, if only the displacement variables are used. Mixed finite elements using both displacement and pressure variables provide a well-known remedy, but they yield larger and indefinite discrete systems for which the design of scalable and efficient iterative solvers is challenging. Two-level overlapping Schwarz preconditioners for the almost incompressible system of linear elasticity, discretized by mixed finite elements with discontinuous pressures, are constructed and analyzed. The preconditioned systems are accelerated either by a GMRES (generalized minimum residual) method applied to the resulting discrete saddle point problem or by a PCG (preconditioned conjugate gradient) method applied to a positive definite, although extremely ill-conditioned, reformulation of the problem obtained by eliminating all pressure variables on the element level. A novel theoretical analysis of the algorithm for the positive definite reformulation is given by extending some earlier results by Dohrmann and Widlund. The main result of the paper is a bound on the condition number of the algorithm which is cubic in the relative overlap and grows logarithmically with the number of elements across individual subdomains, but is otherwise independent of the number of subdomains, their diameters and mesh sizes, the incompressibility of the material, and possible discontinuities of the material parameters across the subdomain interfaces. Numerical results in the plane confirm the theory and also indicate that an analogous result should hold for the saddle point formulation, as well as for spectral element discretizations.
-
TR2014-965
2014
A BDDC algorithm with deluxe scaling for H(curl) in two dimensions with irregular subdomains
Calvo, Juan G.
Abstract
|
PDF
Title: A BDDC algorithm with deluxe scaling for H(curl) in two dimensions with irregular subdomains
Author(s): Calvo, Juan G.
Abstract:
A bound is obtained for the condition number of a BDDC algorithm for problems posed in H(curl) in two dimensions, where the subdomains are only assumed to be uniform in the sense of Peter Jones. For the primal variable space, a continuity constraint for the tangential average over each interior subdomain edge is imposed.
For the averaging operator, a new technique named deluxe scaling is used. Our bound is independent of jumps in the coefficients across the interface between the subdomains and depends only on a few geometric parameters of the decomposition. Numerical results that verify the result are shown, including some with subdomains with fractal edges and others obtained by a mesh partitioner.
-
TR2014-968
2014
A two-level overlapping Schwarz method for H(curl) in two dimensions with irregular subdomains
Calvo, Juan G.
Abstract
|
PDF
Title: A two-level overlapping Schwarz method for H(curl) in two dimensions with irregular subdomains
Author(s): Calvo, Juan G.
Abstract:
A bound is obtained for the condition number of a two-level overlapping Schwarz algorithm for problems posed in H(curl) in two dimensions, where the subdomains are only assumed to be uniform in the sense of Peter Jones. The coarse space is based on energy minimization and its dimension equals the number of interior subdomain edges. Local direct solvers are used on the overlapping subdomains. Our bound depends only on a few geometric parameters of the decomposition. This bound is independent of jumps in the coefficients across the interface between the subdomains for most of the different cases considered. Numerical experiments that verify the result are shown, including some with subdomains with fractal edges and others obtained by a mesh partitioner.
-
Ph.D. Thesis
2014
Analyzing Tatonnement Dynamics in Economic Markets
Cheung, Yun Kuen
Abstract
|
PDF
Title: Analyzing Tatonnement Dynamics in Economic Markets
Candidate: Cheung, Yun Kuen
Advisor(s): Cole, Richard
Abstract:
The impetus for this dissertation is to explain why well-functioning markets might be able to stay at or near a market equilibrium. We argue that tatonnement, a natural, simple and distributed price update dynamic in economic markets, is a plausible candidate to explain how markets might reach their equilibria.
Tatonnement is broadly defined as follows: if the demand for a good is more than the supply, increase the price of the good, and conversely, decrease the price when the demand is less than the supply. Prior works show that tatonnement converges to market equilibrium in some markets while it fails to converge in other markets. Our goal is to extend the classes of markets in which tatonnement is shown to converge. The prior positive results largely concerned markets with substitute goods. We seek market constraints which enable tatonnement to converge in markets with complementary goods, or with a mixture of substitutes and complementary goods. We also show fast convergence rates for some of these markets.
We introduce an amortized analysis technique to handle asynchronous events ‒ in our case, asynchronous price updates. For some markets, we also show that tatonnement is equivalent to generalized gradient descent (GGD). The amortized analysis and our analysis of GGD may be of independent interest.
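The basic price dynamic defined above can be sketched as follows. This is one simple multiplicative variant against an assumed demand oracle, not the specific update rules analyzed in the thesis.

```python
def tatonnement(demand, supply, prices, step=0.1, rounds=100):
    """Raise the price of a good when demand exceeds supply, lower it
    otherwise; the step is proportional to the relative excess demand.

    demand: function mapping a price vector to a list of demands.
    """
    for _ in range(rounds):
        d = demand(prices)
        prices = [p * (1 + step * (d[i] - supply[i]) / supply[i])
                  for i, p in enumerate(prices)]
    return prices
```

For a toy market where each good is bought from a fixed budget B_i (demand B_i / p_i), the dynamic contracts toward the equilibrium prices B_i / s_i, illustrating the convergence behavior the thesis studies in far greater generality.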
-
TR2014-964
2014
A BDDC algorithm with deluxe scaling for three-dimensional H(curl) problems
Dohrmann, Clark R.;
Widlund, Olof B.
Abstract
|
PDF
Title: A BDDC algorithm with deluxe scaling for three-dimensional H(curl) problems
Author(s): Dohrmann, Clark R.; Widlund, Olof B.
Abstract:
In this paper, we present and analyze a BDDC algorithm for a class of elliptic problems in the three-dimensional H(curl) space. Compared with existing results, our condition number estimate requires fewer assumptions and also involves two fewer powers of log(H/h), making it consistent with optimal estimates for other elliptic problems. Here, H/h is the maximum of H_i/h_i over all subdomains, where H_i and h_i are the diameter and the smallest element diameter for the subdomain Ω_i.
The analysis makes use of two recent developments. The first is a new approach to averaging across the subdomain interfaces, while the second is a new technical tool which allows arguments involving trace classes to be avoided. Numerical examples are presented to confirm the theory and demonstrate the importance of the new averaging approach in certain cases.
-
Ph.D. Thesis
2014
Low-latency Image Recognition with GPU-accelerated Convolutional Networks for Web-based Services
Huang, Fu Jie
Abstract
|
PDF
Title: Low-latency Image Recognition with GPU-accelerated Convolutional Networks for Web-based Services
Candidate: Huang, Fu Jie
Advisor(s): LeCun, Yann
Abstract:
In this work, we describe an application of convolutional networks to object classification and detection in images. The task of image based object recognition is surveyed in the first chapter. Its application in internet advertisement is one of the main motivations of this work.
The architecture of the convolutional networks is described in details in the following chapter. Stochastic gradient descent is used to train the networks.
We then describe the data collection and labelling process. The set of labelled training data essentially determines what kind of recognizer is built. Four binary classifiers are trained for the object types sailboat, car, motorbike, and dog.
A GPU-based massively parallel implementation of the convolutional networks is built. This enables us to run the convolution operations close to 40 times faster than on a traditional CPU. Details of how to implement the convolution operation on NVIDIA GPUs using CUDA are discussed.
In order to apply the object recognizer in a production environment where millions of images are processed daily, we have built a platform with cloud computing. We describe how large-scale and low-latency image processing can be achieved with such a system.
-
Ph.D. Thesis
2014
Effective Algorithms for the Satisfiability of Quantifier-Free Formulas Over Linear Real and Integer Arithmetic
King, Tim
Abstract
|
PDF
Title: Effective Algorithms for the Satisfiability of Quantifier-Free Formulas Over Linear Real and Integer Arithmetic
Candidate: King, Tim
Advisor(s): Barrett, Clark
Abstract:
A core technique of modern tools for formally reasoning about computing systems is generating and dispatching queries to automated theorem provers, including Satisfiability Modulo Theories (SMT) provers. SMT provers aim at the tight integration of decision procedures for propositional satisfiability and decision procedures for fixed first-order theories ‒ known as theory solvers. This thesis presents several advancements in the design and implementation of theory solvers for quantifier-free linear real, integer, and mixed integer and real arithmetic. These are implemented within the SMT system CVC4. We begin by formally describing the Satisfiability Modulo Theories problem and the role of theory solvers within CVC4. We discuss known techniques for building solvers for quantifier-free linear real, integer, and mixed integer and real arithmetic around the Simplex for DPLL(T) algorithm. We give several small improvements to theory solvers using this algorithm and describe the implementation and theory of this algorithm in detail. To extend the class of problems that the theory solver can robustly support, we borrow and adapt several techniques from linear programming (LP) and mixed integer programming (MIP) solvers which come from the tradition of optimization. We propose a new decision procedure for quantifier-free linear real arithmetic that replaces the Simplex for DPLL(T) algorithm with a variant of the Simplex algorithm that performs a form of optimization ‒ minimizing the sum of infeasibilities. In this thesis, we additionally describe techniques for leveraging LP and MIP solvers to improve the performance of SMT solvers without compromising correctness. Previous efforts to leverage such solvers in the context of SMT have concluded that in addition to being potentially unsound, such solvers are too heavyweight to compete in the context of SMT.
We present an empirical comparison against other state-of-the-art SMT tools to demonstrate the effectiveness of the proposed solutions.
-
TR2014-966
2014
Local temporal reasoning
Koskinen, Eric
Abstract
|
PDF
Title: Local temporal reasoning
Author(s): Koskinen, Eric
Abstract:
We present the first method for reasoning about temporal logic properties of higher-order, infinite-data programs. By distinguishing between the finite traces and infinite traces in the specification, we obtain rules that permit us to reason about the temporal behavior of program parts via a type-and-effect system, which is then able to compose these facts together to prove the overall target property of the program. The type system alone is strong enough to derive many temporal safety properties using refinement types and temporal effects. We also show how existing techniques can be used as oracles to provide liveness information (e.g. termination) about program parts and that the type-and-effect system can combine this information with temporal safety information to derive nontrivial temporal properties. Our work has application toward verification of higher-order software, as well as modular strategies for procedural programs.
-
TR2014-967
2014
The Push/Pull model of transactions
Koskinen, Eric;
Parkinson, Matthew
Abstract
|
PDF
Title: The Push/Pull model of transactions
Author(s): Koskinen, Eric; Parkinson, Matthew
Abstract:
We present a general theory of serializability, unifying a wide range of transactional algorithms, including some that are yet to come. To this end, we provide a compact semantics in which concurrent transactions push their effects into the shared view (or unpush to recall effects) and pull the effects of potentially uncommitted concurrent transactions into their local view (or unpull to detangle). Each operation comes with simple side-conditions given in terms of commutativity (Lipton's left-movers and right-movers).
The benefit of this model is that most of the elaborate reasoning (coinduction, simulation, subtle invariants, etc.) necessary for proving the serializability of a transactional algorithm is already proved within the semantic model. Thus, proving serializability (or opacity) amounts simply to mapping the algorithm on to our rules, and showing that it satisfies the rules' side-conditions.
-
Ph.D. Thesis
2014
Cryptographic Algorithms for the Secure Delegation of Multiparty Computation
Lopez-Alt, Adriana
Abstract
|
PDF
Title: Cryptographic Algorithms for the Secure Delegation of Multiparty Computation
Candidate: Lopez-Alt, Adriana
Advisor(s): Dodis, Yevgeniy
Abstract:
In today’s world, we store our data and perform expensive computations remotely on powerful servers (a.k.a. “the cloud”) rather than on our local devices. In this dissertation we study the question of achieving cryptographic security in the setting where multiple (mutually distrusting) clients wish to delegate the computation of a joint function on their inputs to an untrusted cloud, while keeping these inputs private. We introduce two frameworks for modeling such protocols.
- The first, called cloud-assisted multiparty computation (cloud-assisted MPC), builds on the standard notion of MPC to incorporate the concept of delegation. In particular, since the cloud is expected to perform the computation of the function, our definition requires the communication complexity of the protocol, as well as the computation time of all clients to be (essentially) independent of the complexity of the function.
- The second, called on-the-fly MPC, builds on the notion of cloud-assisted MPC and further requires that the clients be involved only when initially uploading their input to the cloud, and in a final phase when outputs are revealed. In particular, this allows the server to dynamically choose functions (and subsets of data on which to evaluate these functions) “on-the-fly”, and evaluate them without requiring any interaction with the clients. The only interaction required takes place in the final phase after the computation has been completed, when the clients must retroactively approve both the chosen functions, and the subsets of data upon which these functions were evaluated.
We construct cloud-assisted and on-the-fly MPC protocols using fully homomorphic encryption (FHE). However, FHE requires inputs to be encrypted under the same key; we extend it to the multiparty setting in two ways:
- We introduce the notion of threshold FHE : fully homomorphic encryption that allows the clients to jointly generate a common public key (whose corresponding secret key is shared among them), as well as decrypt a ciphertext under this public key without learning anything but the plaintext. Using threshold FHE, we show how to construct an efficient cloud-assisted MPC protocol. We construct threshold FHE using (a modification of) the Brakerski-Vaikuntanathan (ring-based) FHE scheme; however our ideas extend to many other lattice-based FHE schemes in the literature.
- We introduce the notion of multikey FHE : fully homomorphic encryption that allows the cloud to perform homomorphic evaluation on ciphertexts encrypted under different and independent keys. We show a construction of on-the-fly MPC using multikey FHE, and construct a multikey FHE scheme based on NTRU encryption. We highlight that it was previously not known how to make NTRU fully homomorphic, even for a single key. Therefore, we view the construction of (multikey) FHE from NTRU encryption as a main contribution of independent interest.
-
M.S. Thesis
2014
Resolution-Exact Planner for a 2-link Planar Robot using Soft Predicates
Luo, Zhongdi
Abstract
|
PDF
Title: Resolution-Exact Planner for a 2-link Planar Robot using Soft Predicates
Candidate: Luo, Zhongdi
Advisor(s): Yap, Chee
Abstract:
Motion planning is a major topic in robotics. It frequently refers to the motion of a robot in an R^2 or R^3 world that contains obstacles. Our goal is to produce algorithms that are practical and have strong theoretical guarantees. Recently, a new framework, Soft Subdivision Search (SSS), was introduced to solve various motion planning problems. It is based on soft predicates and a new notion of correctness called resolution-exactness. Unlike most theoretical algorithms, such algorithms can be implemented without exact computation. In this thesis we describe a detailed, realized SSS algorithm for a 2-link robot in R^2. We prove the correctness of our predicates and conduct an experimental study of several strategies to enhance the basic SSS algorithm. In particular, we introduce a technique called T/R Splitting, in which the splittings of the rotational degrees of freedom are deferred to the end. The results give strong evidence of the practicability of SSS.
-
Ph.D. Thesis
2014
Robust and Efficient Methods for Approximation and Optimization of Stability Measures
Mitchell, Tim
Abstract
|
PDF
Title: Robust and Efficient Methods for Approximation and Optimization of Stability Measures
Candidate: Mitchell, Tim
Advisor(s): Overton, Michael
Abstract:
We consider two new algorithms with practical application to the problem of designing controllers for linear dynamical systems with input and output: a new spectral value set based algorithm called hybrid expansion-contraction intended for approximating the H-infinity norm, or equivalently, the complex stability radius, of large-scale systems, and a new BFGS SQP based optimization method for nonsmooth, nonconvex constrained optimization motivated by multi-objective controller design. In comprehensive numerical experiments, we show that both algorithms in their respective domains are significantly faster and more robust compared to other available alternatives. Moreover, we present convergence guarantees for hybrid expansion-contraction, proving that it converges at least superlinearly, and observe that it converges quadratically in practice, typically to good approximations to the H-infinity norm, for problems where we can verify this. We also extend the hybrid expansion-contraction algorithm to the real stability radius, a measure which is known to be more difficult to compute than the complex stability radius. Finally, for the purposes of comparing multiple optimization methods, we present a new visualization tool called relative minimization profiles that allows for simultaneously assessing the relative performance of algorithms with respect to three important performance characteristics, highlighting how these measures interrelate to one another and compare to the other competing algorithms on heterogeneous test sets. We employ relative minimization profiles to empirically validate our proposed BFGS SQP method in terms of quality of minimization, attaining feasibility, and speed of progress compared to other available methods on challenging test sets comprised of nonsmooth, nonconvex constrained optimization problems arising in controller design.
-
Ph.D. Thesis
2014
Building Efficient Distributed In-memory Systems
Power, Russell
Abstract
|
PDF
Title: Building Efficient Distributed In-memory Systems
Candidate: Power, Russell
Advisor(s): Li, Jinyang
Abstract:
The recent cloud computing revolution has changed the distributed computing landscape, making the resources of entire datacenters available to ordinary users. This process has been greatly aided by dataflow-style frameworks such as MapReduce, which expose a simple programming model and allow for efficient, fault-tolerant execution across many machines. While the MapReduce model has proved effective for many applications, there is a wide class of applications that are difficult to write or inefficient in such a model. This includes many familiar and important applications such as PageRank, matrix factorization and a number of machine learning algorithms. In the absence of a good framework for building these applications, users resort to writing them using MPI or RPC, a difficult and error-prone undertaking.
This thesis presents two complementary frameworks, Piccolo and Spartan, which help programmers to write in-memory distributed applications not served well by existing approaches.
Piccolo presents a new data-centric programming model for in-memory applications. Unlike data-flow models, Piccolo allows programs running on different machines to share distributed, mutable state via a key-value table interface. This design allows for both high performance and additional flexibility. Piccolo makes novel use of commutative updates to efficiently resolve write-write conflicts. We find Piccolo provides an efficient backend for a wide range of applications: from PageRank and matrix multiplication to web crawling.
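The commutative-update idea can be illustrated with a toy key-value table: writes go through a user-supplied commutative accumulator, so concurrent writers never clobber each other and the application order of updates does not matter. This is only a single-process sketch of the interface, not Piccolo's distributed implementation.

```python
class AccumTable:
    """Toy key-value table in the spirit of Piccolo: write-write conflicts
    are resolved by folding each update into the stored value with a
    commutative, associative accumulator (e.g. addition for PageRank
    contributions), so updates may arrive in any order."""

    def __init__(self, accumulate, initial):
        self.accumulate = accumulate
        self.initial = initial
        self.data = {}

    def update(self, key, value):
        self.data[key] = self.accumulate(self.data.get(key, self.initial), value)

    def get(self, key):
        return self.data.get(key, self.initial)
```

In a PageRank-style computation, each machine can push rank contributions into the table concurrently; because addition commutes, the final ranks are deterministic regardless of interleaving.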
While Piccolo provides an efficient backend for distributed computation, it can still be somewhat cumbersome to write programs using it directly. To address this, we created Spartan. Spartan is a distributed implementation of the NumPy array language, and it fully supports important array language features such as spatial indexing (slicing), fancy indexing and broadcasting. A key feature of Spartan is its use of a small number of simple, powerful high-level operators to provide most functionality. Not only do these operators dramatically simplify the design and implementation of Spartan, they also allow users to implement new functionality with ease.
We evaluate Piccolo and Spartan on a wide range of applications and find that they both perform significantly better than existing approaches.
-
TR2014-971
2014
VerifiableAuction: An Auction System for a Suspicious World
Rosenberg, Michael;
Shasha, Dennis
Abstract
|
PDF
Title: VerifiableAuction: An Auction System for a Suspicious World
Author(s): Rosenberg, Michael; Shasha, Dennis
Abstract:
This paper presents a cryptosystem that allows fair first-price sealed-bid auctions among groups of individuals to be conducted over the internet without the need for a trusted third party. A client who maintains the secrecy of his or her private key will be able to keep his or her bid secret from the server and from all other clients until the client explicitly decides to reveal it, which happens only after all clients have published their obfuscated bids. Each client will be able to verify that every other client's revealed bid corresponds to that client's obfuscated bid at the end of each auction. Each client is provided with a transcript of all auction proceedings so that they may be independently audited.
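The publish-then-reveal flow can be illustrated with a generic hash commitment; this is a standard commit-reveal sketch, not the paper's actual cryptosystem, and the function names are illustrative.

```python
import hashlib
import secrets

def commit(bid):
    """Publish the digest now; reveal (bid, nonce) only after every bidder
    has committed. The random nonce hides low-entropy bids from guessing."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + bid.to_bytes(8, "big")).hexdigest()
    return digest, nonce

def verify(digest, bid, nonce):
    """Anyone can check that a revealed bid matches the earlier commitment."""
    return hashlib.sha256(nonce + bid.to_bytes(8, "big")).hexdigest() == digest
```

Binding comes from collision resistance of the hash: a bidder cannot find a second (bid, nonce) pair opening the same digest, so the revealed bid must be the one committed before the auction closed.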
-
Ph.D. Thesis
2014
Runtime Compilation of Array-Oriented Python Programs
Rubinsteyn, Alex
Abstract
|
PDF
Title: Runtime Compilation of Array-Oriented Python Programs
Candidate: Rubinsteyn, Alex
Advisor(s): Shasha, Dennis
Abstract:
The Python programming language has become a popular platform for data analysis and scientific computing. To mitigate the poor performance of Python's standard interpreter, numerically intensive computations are typically offloaded to library functions written in languages such as Fortran or C. If, however, some algorithm does not have an existing low-level implementation, then the scientific programmer must either accept sub-standard performance (sometimes orders of magnitude slower than native code) or themselves implement the desired functionality in a less productive but more efficient language.
To alleviate this problem, this thesis presents Parakeet, a runtime compiler for an array-oriented subset of Python. Parakeet does not replace the Python interpreter, but rather selectively augments it by compiling and executing functions explicitly marked by the programmer. Parakeet uses runtime type specialization to eliminate the performance-defeating dynamism of untyped Python code. Parakeet's pervasive use of data-parallel operators as a means for implementing array operations enables high-level restructuring optimizations and compilation to parallel hardware such as multi-core CPUs and graphics processors. We evaluate Parakeet on a collection of numerical benchmarks and demonstrate its dramatic capacity for accelerating array-oriented Python programs.
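Runtime type specialization of explicitly marked functions can be illustrated with a toy decorator; here "compilation" is merely caching one variant per argument-type signature, whereas the real system lowers the function to typed native parallel code. The decorator name and attribute are illustrative, not Parakeet's API.

```python
def specialize(fn):
    """Toy stand-in for a jit-style decorator: dispatch to one cached
    variant per argument-type signature, compiling on first encounter."""
    variants = {}
    def wrapper(*args):
        sig = tuple(type(a) for a in args)
        if sig not in variants:
            variants[sig] = fn   # a real compiler would emit typed code here
        return variants[sig](*args)
    wrapper.variants = variants
    return wrapper
```

Because each cached variant sees fixed argument types, a real compiler can drop dynamic dispatch and boxing inside the function body, which is where the speedup over the interpreter comes from.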
-
Ph.D. Thesis
2014
A Deep Learning Pipeline for Image Understanding and Acoustic Modeling
Sermanet, Pierre
Abstract
|
PDF
Title: A Deep Learning Pipeline for Image Understanding and Acoustic Modeling
Candidate: Sermanet, Pierre
Advisor(s): LeCun, Yann
Abstract:
One of the biggest challenges artificial intelligence faces is making sense of the real world through sensory signals such as audio or video. Noisy inputs, varying object viewpoints, deformations and lighting conditions turn it into a high-dimensional problem which cannot be efficiently solved without learning from data.
This thesis explores a general way of learning from high-dimensional data (video, images, audio, text, financial data, etc.) called deep learning. It thrives on the increasingly large amounts of available data to learn robust and invariant internal features in a hierarchical manner, directly from the raw signals.
We propose a unified pipeline for feature learning, recognition, localization and detection using Convolutional Networks (ConvNets) that can obtain state-of-the-art accuracy on a number of pattern recognition tasks, including acoustic modeling for speech recognition and object recognition in computer vision. ConvNets are particularly well suited for learning from continuous signals in terms of both accuracy and efficiency.
Additionally, a novel and general deep learning approach to detection is proposed and successfully demonstrated on the most challenging vision datasets. We then generalize it to other modalities such as speech data. This approach allows accurate localization and detection of objects in images or phones in voice signals by learning to predict boundaries from internal representations. We extend the reach of deep learning from classification to detection tasks in an integrated fashion by learning multiple tasks using a single deep model. This work is among the first to outperform human vision and establishes a new state of the art on some computer vision and speech recognition benchmarks.
-
Ph.D. Thesis
2014
Towards New Interfaces For Pedagogy
Stein, Murphy
Abstract
|
PDF
Title: Towards New Interfaces For Pedagogy
Candidate: Stein, Murphy
Advisor(s): Perlin, Ken
Abstract:
Developing technology to help people teach and learn is an important topic in Human Computer Interaction (HCI).
In this thesis we present three studies on this topic. In the first study, we demonstrate new games for learning mathematics and discuss the evidence for key design decisions from user studies. In the second study, we develop a real-time video compositing system for distance education and share evidence for its potential value compared to standard techniques from two user studies. In the third study, we demonstrate our markerless hand tracking interface for real-time 3D manipulation and explain its advantages compared to other state-of-the-art methods.
-
Ph.D. Thesis
2014
Computational Complexity Implications of Secure Coin-Flipping
Tentes, Aristeidis
Abstract
|
PDF
Title: Computational Complexity Implications of Secure Coin-Flipping
Candidate: Tentes, Aristeidis
Advisor(s): Dodis, Yevgeniy
Abstract:
Modern cryptography is based on computational intractability assumptions, e.g., Factoring, Discrete Logarithm, Diffie-Hellman, etc. However, since an assumption might be proven incorrect, much effort has gone into constructing cryptographic primitives from the weakest possible assumption. The most popular minimal assumption, which is implied by the existence of almost all cryptographic primitives, is the existence of One-Way Functions. Coin-flipping protocols are known to be implied by One-Way Functions; however, a complete characterization of the converse direction is not known. There was even speculation that weak notions of coin-flipping protocols might be strictly weaker than One-Way Functions. In this thesis we show that even very weak notions of coin-flipping protocols do imply One-Way Functions. In particular, we show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., 0.499) implies the existence of One-Way Functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias 0.207. Unlike the former result, our result also holds for weak coin-flipping protocols.
-
TR2014-963
2014
On Automating Separation Logic with Trees and Data
Wies, Thomas
Abstract
|
PDF
Title: On Automating Separation Logic with Trees and Data
Author(s): Wies, Thomas
Abstract:
Separation logic (SL) is a widely used formalism for verifying heap manipulating programs. Existing SL solvers focus on decidable fragments for list-like structures. More complex data structures such as trees are typically unsupported in implementations, or handled by incomplete heuristics.
While complete decision procedures for reasoning about trees have been proposed, these procedures suffer from high complexity, or make global assumptions about the heap that contradict the separation logic philosophy of local reasoning. In this paper, we present a fragment of classical first-order logic for local reasoning about tree-like data structures. The logic is decidable in NP and the decision procedure allows for combinations with other decidable first-order theories for reasoning about data. Such extensions are essential for proving functional correctness properties.
We have implemented our decision procedure and, building on earlier work on translating SL proof obligations into classical logic, integrated it into an SL-based verification tool. We successfully used the tool to verify functional correctness of tree-based data structure implementations.
-
Ph.D. Thesis
2014
Data-driven Approaches for Paraphrasing across Language Variations
Xu, Wei
Abstract
|
PDF
Title: Data-driven Approaches for Paraphrasing across Language Variations
Candidate: Xu, Wei
Advisor(s): Grishman, Ralph
Abstract:
Our language changes very rapidly, accompanying political, social and cultural trends, as well as the evolution of science and technology. The Internet, and especially social media, has accelerated this process of change. This poses a severe challenge for both human beings and natural language processing (NLP) systems, which usually model only a snapshot of language, presented in the form of text corpora within a certain domain and time frame.
While much previous effort has investigated monolingual paraphrase and bilingual translation, we focus on modeling meaning-preserving transformations between variants of a single language. We use Shakespearean and Internet language as examples to investigate various aspects of this new paraphrase problem, including acquisition, generation, detection and evaluation.
A data-driven methodology is applied intensively throughout the course of this study. Several paraphrase corpora are constructed using automatic techniques, experts and crowdsourcing platforms. Paraphrase systems are trained and evaluated by using these data as a cornerstone. We show that even with a very noisy or a relatively small amount of parallel training data, it is possible to learn paraphrase models which capture linguistic phenomena. This work expands the scope of paraphrase studies to targeting different language variations, and more potential applications, such as text normalization and domain adaptation.
-
Ph.D. Thesis
2014
Positive-Unlabeled Learning in the Context of Protein Function Prediction
Youngs, Noah
Abstract
|
PDF
Title: Positive-Unlabeled Learning in the Context of Protein Function Prediction
Candidate: Youngs, Noah
Advisor(s): Shasha, Dennis
Abstract:
With the recent proliferation of large, unlabeled data sets, a particular subclass of semi-supervised learning problems has become more prevalent. Known as positive-unlabeled learning (PU learning), this scenario provides only positive labeled examples, usually just a small fraction of the entire dataset, with the remaining examples unknown and thus potentially belonging to either the positive or negative class. Since the vast majority of traditional machine learning classifiers require both positive and negative examples in the training set, a new class of algorithms has been developed to deal with PU learning problems.
A canonical example of this scenario is topic labeling of a large corpus of documents. Once the size of a corpus reaches into the thousands, it becomes largely infeasible to have a curator read even a sizable fraction of the documents and annotate them with topics. In addition, the entire set of topics may not be known, or may change over time, making it impossible for a curator to annotate which documents are NOT about certain topics. Thus a machine learning algorithm needs to be able to learn from a small set of positive examples, without knowledge of the negative class, while knowing that the unlabeled training examples may contain an arbitrary number of additional but as yet unknown positive examples. Another example of a PU learning scenario recently garnering attention is the protein function prediction problem (PFP problem).
While the number of organisms with fully sequenced genomes continues to grow, the progress of annotating those sequences with the biological functions that they perform lags far behind. Machine learning methods have already been successfully applied to this problem, but with many organisms having a small number of positive annotated training examples, and the lack of availability of almost any labeled negative examples, PU learning algorithms can make large gains in predictive performance.
The first part of this dissertation motivates the protein function prediction problem, explores previous work, and introduces novel methods that improve upon previously reported benchmarks for a particular type of learning algorithm, known as Gaussian Random Field Label Propagation (GRFLP). In addition, we present improvements to the computational efficiency of the GRFLP algorithm, and a modification to the traditional structure of the PFP learning problem that allows for simultaneous prediction across multiple species.
The second part of the dissertation focuses specifically on the positive-unlabeled aspects of the PFP problem. Two novel algorithms are presented, and rigorously compared to existing PU learning techniques in the context of protein function prediction. Additionally, we take a step back and examine some of the theoretical considerations of the PU scenario in general, and provide an additional novel algorithm applicable in any PU context. This algorithm is tailored for situations in which the labeled positive examples are a small fraction of the set of true positive examples, and where the labeling process may be subject to some type of bias rather than being a random selection of true positives (arguably some of the most difficult PU learning scenarios).
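As a point of reference for the PU setting (this is the standard Elkan-Noto calibration baseline, not one of the dissertation's novel algorithms), one can estimate the labeling frequency c = P(s=1 | y=1) from a classifier trained to predict the label indicator s, then rescale its scores; everything below, including the synthetic data, is an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data: positives near +1, negatives near -1.
pos = rng.normal(1.0, 0.5, 2000)
neg = rng.normal(-1.0, 0.5, 2000)
c_true = 0.3                                   # labeling frequency
is_labeled = rng.random(2000) < c_true         # which positives get labels

x = np.concatenate([pos, neg])
s = np.concatenate([is_labeled.astype(float), np.zeros(2000)])  # s=1 iff labeled

# Fit logistic regression g(x) ~ P(s=1|x) by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(3000):
    z = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((z - s) * x)
    b -= 0.5 * np.mean(z - s)

g = lambda t: 1.0 / (1.0 + np.exp(-(w * t + b)))

# Key identity: on clear positives g(x) ~ c, so estimate c from the
# labeled positives and rescale scores into (approximate) P(y=1|x).
c_hat = g(pos[is_labeled]).mean()
p_pos = np.clip(g(x) / c_hat, 0.0, 1.0)
print(round(float(c_hat), 2))
```

Note the sketch assumes labeled positives are selected at random; the dissertation's point above is precisely that biased labeling breaks this assumption.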
The third and fourth sections return to the PFP problem, examining the power of tertiary structure as a predictor of protein function, as well as presenting two case studies of function prediction performance on novel benchmarks. Lastly, we conclude with several promising avenues of future research into both PU learning in general, and the protein function prediction problem specifically.
-
Ph.D. Thesis
2014
Hierarchical Convolutional Deep Learning in Computer Vision
Zeiler, Matthew
Abstract
|
PDF
Title: Hierarchical Convolutional Deep Learning in Computer Vision
Candidate: Zeiler, Matthew
Advisor(s): Fergus, Rob
Abstract:
It has long been the goal in computer vision to learn a hierarchy of features useful for object recognition. Spanning the two traditional paradigms of machine learning, unsupervised and supervised learning, we investigate the application of deep learning methods to tackle this challenging task and to learn robust representations of images.
We begin our investigation with the introduction of a novel unsupervised learning technique called deconvolutional networks. Based on convolutional sparse coding, we show this model learns interesting decompositions of images into parts without object label information. This method, which easily scales to large images, becomes increasingly invariant by learning multiple layers of feature extraction coupled with pooling layers. We introduce a novel pooling method called Gaussian pooling to enable these layers to store continuous location information while being differentiable, creating a unified objective function to optimize.
In the supervised learning domain, a well-established model for recognition of objects is the convolutional network. We introduce a new regularization method for convolutional networks called stochastic pooling which relies on sampling noise to prevent these powerful models from overfitting. Additionally, we show novel visualizations of these complex models to better understand what they learn and to provide insight on how to develop state-of-the-art architectures for large-scale classification of 1,000 different object categories.
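Stochastic pooling as described here fits in a few lines; the sketch below samples an activation with probability proportional to its value at training time and uses the probability-weighted average at test time, with the all-zero-region fallback filled in as our own assumption:

```python
import numpy as np

def stochastic_pool(region, rng):
    """Training time: sample one activation with probability proportional
    to its value; assumes activations are already non-negative."""
    a = region.ravel()
    if a.sum() == 0:                      # assumption: fall back to uniform
        return rng.choice(a)
    return rng.choice(a, p=a / a.sum())

def prob_weighted_pool(region):
    """Test time: probability-weighted average of the activations."""
    a = region.ravel()
    return 0.0 if a.sum() == 0 else float((a / a.sum() * a).sum())

region = np.array([[1.0, 2.0], [0.0, 5.0]])
rng = np.random.default_rng(0)
sample = stochastic_pool(region, rng)     # one of 1.0, 2.0, 5.0
avg = prob_weighted_pool(region)          # (1*1 + 2*2 + 5*5) / 8 = 3.75
print(sample, avg)
```

The sampling noise is what regularizes: zero activations are never picked, and large activations are favored but not guaranteed, unlike max pooling.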
We also investigate some other related problems in deep learning. First, we introduce a model for the task of mapping one high dimensional time series sequence onto another. Second, we address the choice of nonlinearity in neural networks, showing evidence that rectified linear units outperform other types in automatic speech recognition. Finally, we introduce a novel optimization method called ADADELTA which shows promising convergence speeds in practice while being robust to hyper-parameter selection.
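The ADADELTA update is compact enough to write out; the rule below (decaying averages of squared gradients and squared updates, with their RMS ratio as the step size) follows the published method, while the one-dimensional quadratic objective and iteration count are illustrative choices:

```python
from math import sqrt

def adadelta_minimize(grad, x0, steps=5000, rho=0.95, eps=1e-6):
    """ADADELTA in one dimension: step sizes come from the ratio of the
    RMS of recent updates to the RMS of recent gradients, so there is no
    global learning rate to tune."""
    x = x0
    eg2 = 0.0   # decaying average of squared gradients
    ed2 = 0.0   # decaying average of squared updates
    for _ in range(steps):
        g = grad(x)
        eg2 = rho * eg2 + (1 - rho) * g * g
        dx = -sqrt(ed2 + eps) / sqrt(eg2 + eps) * g
        ed2 = rho * ed2 + (1 - rho) * dx * dx
        x += dx
    return x

# Toy use: minimize f(x) = x^2 (gradient 2x) starting from x = 5.
x_final = adadelta_minimize(lambda x: 2 * x, 5.0)
print(x_final)  # close to the minimizer at 0
```

Notice the units: because dx carries the RMS of past updates in its numerator, the step has the same units as the parameter, which is the method's motivation for dropping the learning rate.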
-
TR2013-955
2013
Isogeometric BDDC Preconditioners with Deluxe Scaling
Beirao Da Veiga, Lourenco;
Pavarino, Luca; Scacchi, Simone; Widlund, Olof; Zampini, Stefano
Abstract
|
PDF
Title: Isogeometric BDDC Preconditioners with Deluxe Scaling
Author(s): Beirao Da Veiga, Lourenco; Pavarino, Luca; Scacchi, Simone; Widlund, Olof; Zampini, Stefano
Abstract:
A BDDC (Balancing Domain Decomposition by Constraints) preconditioner with a novel scaling, introduced by Dohrmann for problems with more than one variable coefficient and here denoted as deluxe scaling, is extended to Isogeometric Analysis of scalar elliptic problems. This new scaling turns out to be more powerful than the standard rho- and stiffness scalings considered in a previous isogeometric BDDC study. Our h-analysis shows that the condition number of the resulting deluxe BDDC preconditioner is scalable with a quasi-optimal polylogarithmic bound which is also independent of coefficient discontinuities across subdomain interfaces. Extensive numerical experiments support the theory and show that the deluxe scaling yields a remarkable improvement over the older scalings, in particular, for large isogeometric polynomial degree and high regularity.
-
M.S. Thesis
2013
An Efficient Active Learning Framework for New Relation Types
Fu, Lisheng
Abstract
|
PDF
Title: An Efficient Active Learning Framework for New Relation Types
Candidate: Fu, Lisheng
Advisor(s): Grishman, Ralph; Davis, Ernest
Abstract:
Relation extraction is a fundamental task in information extraction. Different methods have been studied for building a relation extraction system. Supervised training of models for this task has yielded good performance, but at substantial annotation cost for large training corpora (about 40K same-sentence entity pairs). Semi-supervised methods require only a seed set, but their performance is very limited when the seed set is small, which is unsatisfactory for real relation extraction applications. The trade-off between annotation and performance is also hard to decide in practice. Active learning strategies allow users to gradually improve the model and to achieve performance comparable to supervised methods with limited annotation. A recent study shows that active learning on this task needs far fewer labels per type to build a useful relation extraction application. We believe active learning is a promising direction for relation extraction, and we present a more efficient active learning framework. This framework starts from a better balance between positive and negative samples, and is boosted by interleaving self-training and co-testing. We also study the reduction of annotation cost by enforcing argument type constraints. Experiments show a substantial speed-up compared to the previous state-of-the-art pure co-testing active learning framework. We obtain reasonable performance with only a hundred labels for individual ACE 2004 relation types. We also developed a GUI tool for real human-in-the-loop active learning trials. These results suggest that useful relation extraction systems can be built in a very short time.
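A minimal pool-based active-learning loop with uncertainty sampling conveys the core idea (this is a generic sketch, not the authors' co-testing/self-training framework; the 1-D threshold "model" and the oracle are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
pool = rng.uniform(-1, 1, 200)          # unlabeled pool of 1-D points
oracle = lambda v: int(v > 0.1)         # hidden labeling rule (the "annotator")

labeled_x = [-0.9, 0.9]                 # tiny seed set, one per class
labeled_y = [oracle(v) for v in labeled_x]
unqueried = list(range(len(pool)))

for _ in range(20):
    # Crude model: threshold halfway between the two class means.
    m0 = np.mean([v for v, y in zip(labeled_x, labeled_y) if y == 0])
    m1 = np.mean([v for v, y in zip(labeled_x, labeled_y) if y == 1])
    boundary = (m0 + m1) / 2
    # Uncertainty sampling: query the pool point nearest the boundary.
    i = min(unqueried, key=lambda j: abs(pool[j] - boundary))
    unqueried.remove(i)
    labeled_x.append(pool[i])
    labeled_y.append(oracle(pool[i]))

acc = np.mean([int(v > boundary) == oracle(v) for v in pool])
print(len(labeled_x), round(float(acc), 3))
```

With only 20 queries concentrated near the decision boundary, the model locates the hidden threshold far more cheaply than labeling the pool at random.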
-
Ph.D. Thesis
2013
Incentive-Centered Design of Money-Free Mechanisms
Gkatzelis, Vasilis
Abstract
|
PDF
Title: Incentive-Centered Design of Money-Free Mechanisms
Candidate: Gkatzelis, Vasilis
Advisor(s): Cole, Richard
Abstract:
This thesis serves as a step toward a better understanding of how to design fair and efficient multiagent resource allocation systems by bringing the incentives of the participating agents to the center of the design process. As the quality of these systems critically depends on the ways in which the participants interact with each other and with the system, an ill-designed set of incentives can lead to severe inefficiencies. The special focus of this work is on the problems that arise when the use of monetary exchanges between the system and the participants is prohibited. This is a common restriction that substantially complicates the designer's task; we nevertheless provide a sequence of positive results in the form of mechanisms that maximize efficiency or fairness despite the possibly self-interested behavior of the participating agents.
The first part of this work is a contribution to the literature on approximate mechanism design without money. Given a set of divisible resources, our goal is to design a mechanism that allocates them among the agents. The main complication here is due to the fact that the agents' preferences over different allocations of these resources may not be known to the system. Therefore, the mechanism needs to be designed in such a way that it is in the best interest of every agent to report the truth about her preferences; since monetary rewards and penalties cannot be used in order to elicit the truth, a much more delicate regulation of the resource allocation is necessary. Our contribution mostly revolves around a new truthful mechanism that we propose, which we call the /Partial Allocation/ mechanism. We first show how to use the two-agent version of this mechanism to create a system with the best currently known worst-case efficiency guarantees for problem instances involving two agents. We then consider fairness measures and prove that the general version of this elegant mechanism yields surprisingly good approximation guarantees for the classic problem of fair division. More specifically, we use the well established solution of /Proportional Fairness/ as a benchmark and we show that for an arbitrary number of agents and resources, and for a very large class of agent preferences, our mechanism provides /every agent/ with a value close to her proportionally fair value. We complement these results by also studying the limits of truthful money-free mechanisms, and by providing other mechanisms for special classes of problem instances. Finally, we uncover interesting connections between our mechanism and the Vickrey-Clarke-Groves mechanism from the literature on mechanism design with money.
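The Proportional Fairness benchmark used above maximizes the product of the agents' utilities (equivalently, the sum of their logarithms). A brute-force sketch for two agents and two divisible goods, with made-up additive valuations (this illustrates only the benchmark, not the Partial Allocation mechanism itself):

```python
import numpy as np

# Two agents, two divisible goods; additive valuations (rows = agents).
V = np.array([[8.0, 2.0],
              [3.0, 7.0]])

best, best_alloc = -np.inf, None
grid = np.linspace(0, 1, 101)
for a in grid:          # fraction of good 0 given to agent 0
    for b in grid:      # fraction of good 1 given to agent 0
        u0 = a * V[0, 0] + b * V[0, 1]
        u1 = (1 - a) * V[1, 0] + (1 - b) * V[1, 1]
        if u0 > 0 and u1 > 0:
            nash = np.log(u0) + np.log(u1)   # proportional fairness objective
            if nash > best:
                best, best_alloc = nash, (a, b)

print(best_alloc)
```

For these valuations the objective assigns each good entirely to the agent who values it more (utilities 8 and 7, product 56), beating the equal split (5 and 5, product 25); the design challenge the thesis addresses is achieving such outcomes truthfully, without payments.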
The second part of this work concerns the design of money-free resource allocation mechanisms for /decentralized/ multiagent systems. As the world has become increasingly interconnected, such systems are using more and more resources that are geographically dispersed; in order to provide scalability in these systems, the mechanisms need to be decentralized. That is, the allocation decisions for any given resource should not assume global information regarding the system's resources or participants. We approach this restriction by using /coordination mechanisms/: a collection of simple resource allocation policies, each of which controls only one of the resources and uses only local information regarding the state of the system. The system's participants, facing these policies, have the option of choosing which resources they will access. We study a variety of coordination mechanisms and we prove that the social welfare of any equilibrium of the games that these mechanisms induce is a good approximation of the optimal welfare. Once again, we complement our positive results by studying the limits of coordination mechanisms. We also provide a detailed explanation of the seemingly counter-intuitive incentives that some of these mechanisms yield. Finally, we use this understanding in order to design a combinatorial constant-factor approximation algorithm for maximizing the social welfare, thus providing evidence that a game-theoretic mindset can lead to novel optimization algorithms.
-
TR2013-957
2013
Tight Lower Bound on the Probability of a Binomial Exceeding its Expectation
Greenberg, Spencer;
Mohri, Mehryar
Abstract
|
PDF
Title: Tight Lower Bound on the Probability of a Binomial Exceeding its Expectation
Author(s): Greenberg, Spencer; Mohri, Mehryar
Abstract:
We give the proof of a tight lower bound on the probability that a binomial random variable exceeds its expected value. The inequality plays an important role in a variety of contexts, including the analysis of relative deviation bounds in learning theory and generalization bounds for unbounded loss functions.
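The quantity in question is easy to compute exactly, which makes the bound simple to sanity-check numerically (the 1/4 constant below is our reading of the result, and the script only verifies particular cases; it proves nothing):

```python
from math import comb

def binom_tail_at_mean(n, p):
    """P[B(n, p) >= n*p], computed exactly from the binomial pmf."""
    mean = n * p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if k >= mean)

# The paper's lower bound says this probability exceeds 1/4
# (for p >= 1/n), a fact worth checking on a few cases.
for n, p in [(10, 0.5), (20, 0.3), (100, 0.05)]:
    print(n, p, round(binom_tail_at_mean(n, p), 3))
```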
-
Ph.D. Thesis
2013
Locality Optimization for Data Parallel Programs
Hielscher, Eric
Abstract
|
PDF
Title: Locality Optimization for Data Parallel Programs
Candidate: Hielscher, Eric
Advisor(s): Shasha, Dennis
Abstract:
Productivity languages such as NumPy and Matlab make it much easier to implement data-intensive numerical algorithms than it is to implement them in efficiency languages such as C++. This is important, as many programmers (1) aren't expert programmers, or (2) don't have time to tune their software for performance, because their main job focus is not programming per se. The tradeoff is typically one of execution time versus programming time: unless there are specialized library functions or precompiled primitives for the particular task, a productivity language is likely to be orders of magnitude slower than an efficiency language.
In this thesis, we present Parakeet, an array-oriented language embedded within Python, a widely-used productivity language. The Parakeet just-in-time compiler dynamically translates whole user functions to high-performance multi-threaded native code. This thesis focuses in particular on our use of data parallel operators as a basis for locality-enhancing program optimizations. We transform Parakeet programs written with the classic data parallel operators (Map, Reduce, and Scan; in Parakeet these are called adverbs) to process small local pieces (called tiles) of data at a time. To express this locality we introduce three new adverbs: TiledMap, TiledReduce, and TiledScan. These tiled adverbs are not exposed to the programmer but rather are automatically generated by a tiling transformation.
We use this tiling algorithm to bring two classic locality optimizations to a data parallel setting: cache tiling, and register tiling. We set register tile sizes statically at compile time, but use an online autotuning search to find good cache tile sizes at runtime. We evaluate Parakeet and these optimizations on various benchmark programs, and exhibit excellent performance even compared to typical C implementations.
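Cache tiling itself is independent of Parakeet and can be sketched directly; the example below tiles a matrix multiply (a stand-in workload, with the tile size an illustrative choice):

```python
import numpy as np

def tiled_matmul(A, B, tile=4):
    """Cache-tiled matrix multiply: process small tiles at a time so each
    tile can stay resident in fast memory while it is reused."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    for i0 in range(0, n, tile):
        for j0 in range(0, p, tile):
            for k0 in range(0, m, tile):
                # Slicing clips at the edges, so sizes need not divide evenly.
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile])
    return C

rng = np.random.default_rng(0)
A, B = rng.random((10, 7)), rng.random((7, 9))
C = tiled_matmul(A, B)
print(np.allclose(C, A @ B))  # prints True: same result, tiled traversal
```

The transformation changes only the iteration order, never the result, which is what lets a compiler apply it automatically behind the data parallel operators.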
-
TR2013-960
2013
Diet Planner: Finding a Nutritionally Sound Diet While Following (Most) of a Dieter’s Desires
Jermsurawong, Mick Jermsak;
Shasha, Dennis
Abstract
|
PDF
Title: Diet Planner: Finding a Nutritionally Sound Diet While Following (Most) of a Dieter’s Desires
Author(s): Jermsurawong, Mick Jermsak; Shasha, Dennis
Abstract:
We describe the design and implementation of a diet website currently housed at http://nutrientdata.herokuapp.com/. The site allows users or dieticians to enter nutritional constraints (e.g., at least this much calcium but not more than that amount of calcium), objectives (e.g., minimize calories), and a list of foods/brands the person likes. The site then determines, if possible, the quantities of at least some of those desired foods that would meet the nutritional constraints. If not possible, then the site guides the user in the choice of other foods that may meet the nutritional constraints. The net result is a tailored diet measured in servings.
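The underlying search can be sketched as a toy brute force over integer servings (the site presumably uses a proper solver; the nutrient numbers below are illustrative, not real data):

```python
from itertools import product

# Hypothetical per-serving nutrient data: (calories, calcium_mg).
foods = {
    "milk":    (103, 305),
    "spinach": (  7,  30),
    "tofu":    ( 76, 253),
}

# Constraint: 600-1200 mg of calcium; objective: minimize calories.
names = list(foods)
best = None
for servings in product(range(5), repeat=len(names)):   # 0..4 servings each
    cal = sum(s * foods[n][0] for s, n in zip(servings, names))
    mg = sum(s * foods[n][1] for s, n in zip(servings, names))
    if 600 <= mg <= 1200 and (best is None or cal < best[0]):
        best = (cal, dict(zip(names, servings)))

print(best)  # lowest-calorie serving counts meeting the calcium window
```

Real instances with many foods and nutrients need linear or integer programming rather than enumeration, but the shape of the problem, constraints plus an objective over serving counts, is the same.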
-
TR2013-952
2013
A General Method for Energy-Error Tradeoffs in Approximate Adders
Kedem, Zvi;
Muntimadugu, Kirthi Krishna
Abstract
|
PDF
Title: A General Method for Energy-Error Tradeoffs in Approximate Adders
Author(s): Kedem, Zvi; Muntimadugu, Kirthi Krishna
Abstract:
Approximate adders are adders with conventional architectures run in an overclocked mode. In this mode, erroneous sums may be produced in exchange for savings in the energy required to execute the computation. The results presented in this report lead to a procedure for allocating the available energy budget among the adder's modules so as to minimize the expected error. For simplicity, only uniformly distributed inputs are considered.
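A toy version of such an allocation procedure can be brute-forced; note that the exponential error model exp(-e_i) and the 2^i significance weights below are our own assumptions for illustration, not the report's model:

```python
from itertools import product
from math import exp

# Toy model (not the report's): module i computes bit-slice i; assume its
# error probability falls off as exp(-e_i) with allocated energy e_i, and
# that an error in slice i costs 2**i (more significant slices matter more).
def expected_error(alloc):
    return sum((2 ** i) * exp(-e) for i, e in enumerate(alloc))

budget, modules = 12, 3
best = min(
    (a for a in product(range(budget + 1), repeat=modules)
     if sum(a) == budget),
    key=expected_error,
)
print(best)  # more energy goes to the more significant modules
```

Under this model the optimum spaces the allocations by roughly ln 2 per bit of significance, matching the intuition that high-order slices deserve the larger share of the budget.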
-
TR2013-956
2013
Reversibility of Turing Machine Computations
Kedem, Zvi M.
Abstract
|
PDF
Title: Reversibility of Turing Machine Computations
Author(s): Kedem, Zvi M.
Abstract:
Since Bennett's 1973 seminal paper, there has been a growing interest in general-purpose, reversible computations and they have been studied using both mathematical and physical models. Following Bennett, given a terminating computation of a deterministic Turing Machine, one may be interested in constructing a new Turing Machine, whose computation consists of two stages. The first stage emulates the original Turing Machine computation on its working tape, while also producing the trace of the computation on a new history tape. The second stage reverses the first stage using the trace information. Ideally, one would want the second stage to traverse whole-machine states in the reverse order from that traversed in the first stage. But this is impossible other than for trivial computations. Bennett constructs the second stage by using additional controller states, beyond those used during the first stage. In this report, a construction of the new machine is presented in which the second stage uses the same and only those controller states that the first stage used and they are traversed in the reverse order. The sole element that is not fully reversed is the position of the head on the history tape, where it is out of phase by one square compared to the first stage.
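The two-stage idea, emulate while recording a trace, then pop the trace to undo every step, can be sketched for a toy machine (this illustrates the general idea only, not the report's specific construction):

```python
def run_with_trace(delta, tape, state, head=0):
    """Stage one: run the machine, recording (state, symbol, move) per
    step so that every transition can later be undone."""
    trace = []
    while (state, tape.get(head, "_")) in delta:
        sym = tape.get(head, "_")
        write, move, new_state = delta[(state, sym)]
        trace.append((state, sym, move))
        tape[head] = write
        head += move
        state = new_state
    return tape, head, state, trace

def reverse(tape, head, trace):
    """Stage two: pop the trace to traverse the steps in reverse order."""
    while trace:
        state, sym, move = trace.pop()
        head -= move          # move the head back
        tape[head] = sym      # restore the overwritten symbol
    return tape, head

# Toy machine: flip bits moving right until it reaches a blank ("_").
delta = {("flip", "0"): ("1", +1, "flip"),
         ("flip", "1"): ("0", +1, "flip")}
tape = {0: "1", 1: "0", 2: "1"}
original = dict(tape)
tape, head, state, trace = run_with_trace(delta, tape, "flip")
tape, head = reverse(tape, head, trace)
print(tape == original, head)  # True 0
```

The trace here plays the role of Bennett's history tape: without it, the overwritten symbols are gone and the computation cannot be run backwards.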
-
Ph.D. Thesis
2013
Piecewise Smooth Surfaces with Features
Kovacs, Denis
Abstract
|
PDF
Title: Piecewise Smooth Surfaces with Features
Candidate: Kovacs, Denis
Advisor(s): Zorin, Denis
Abstract:
The creation, manipulation and display of piecewise smooth surfaces has been a fundamental topic in computer graphics since its inception. The applications range from highest-quality surfaces for manufacturing in CAD, to believable animations of virtual creatures in Special Effects, to virtual worlds rendered in real-time in computer games.
Our focus is on improving the a) mathematical representation and b) automatic construction of such surfaces from finely sampled meshes in the presence of features. Features can be areas of higher geometric detail in an otherwise smooth area of the mesh, or sharp creases that contrast with the overall smooth appearance of an object.
In the first part, we build on techniques that define piecewise smooth surfaces, to improve their quality in the presence of features. We present a crease technique suitable for real-time applications that helps increase the perceived visual detail of objects that are required to be very compactly represented and efficiently evaluated.
We then introduce a new subdivision scheme that allows the use of T-junctions for better local refinement. It thus reduces the need for extraordinary vertices, which can cause surface artifacts especially on animated objects.
In the second part, we consider the problem of how to build the control meshes of piecewise smooth surfaces, in a way that the resulting surface closely approximates an existing data set (such as a 3D range scan), particularly in the presence of features. To this end, we introduce a simple modification that can be applied to a wide range of parameterization techniques to obtain an anisotropic parameterization. We show that a resulting quadrangulation can indeed better approximate the original surface. Finally, we present a quadrangulation scheme that turns a data set into a quad mesh with T-junctions, which we then use as a T-Spline control mesh to obtain a smooth surface.
-
Ph.D. Thesis
2013
Low-level Image Priors and Laplacian Preconditioners for Applications in Computer Graphics and Computational Photography
Krishnan, Dilip
Abstract
|
PDF
Title: Low-level Image Priors and Laplacian Preconditioners for Applications in Computer Graphics and Computational Photography
Candidate: Krishnan, Dilip
Advisor(s): Fergus, Rob
Abstract:
In the first part of this thesis, we develop novel image priors and efficient algorithms for image denoising and deconvolution applications. Our priors and algorithms enable fast, high-quality restoration of images corrupted by noise or blur. In the second part, we develop effective preconditioners for Laplacian matrices. Such matrices arise in a number of computer graphics and computational photography problems such as image colorization, tone mapping and geodesic distance computation on 3D meshes.
The first prior we develop is a spectral prior which models correlations between different spectral bands. We introduce a prototype camera and flash system, used in conjunction with the spectral prior, to enable taking photographs at very low light levels. Our second prior is a sparsity-based measure for blind image deconvolution. This prior gives lower costs to sharp images than to blurred ones, enabling the use of simple and efficient Maximum a-Posteriori algorithms.
We develop a new algorithm for the non-blind deconvolution problem. This enables extremely fast deconvolution of images blurred by a known blur kernel. Our algorithm uses Fast Fourier Transforms and Lookup Tables to achieve real-time deconvolution performance with non-convex gradient-based priors. Finally, for certain image restoration problems with no clear formation model, we demonstrate how learning a direct mapping between original/corrupted patch pairs enables effective restoration.
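Fourier-domain non-blind deconvolution can be sketched as a Wiener-style division with a small regularizer (a generic baseline; the thesis's algorithm handles the non-convex priors mentioned above, which this toy version does not):

```python
import numpy as np

def deconvolve_fft(y, kernel, lam=1e-6):
    """Non-blind deconvolution in the Fourier domain: divide out the
    kernel's frequency response, with lam regularizing frequencies where
    that response is near zero."""
    n = len(y)
    K = np.fft.fft(kernel, n)            # zero-padded kernel spectrum
    Y = np.fft.fft(y)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(0)
x = rng.random(64)                       # unknown sharp signal
kernel = np.array([0.5, 0.3, 0.2])       # known blur kernel
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel, 64)))  # circular blur
x_rec = deconvolve_fft(y, kernel)
print(np.max(np.abs(x_rec - x)) < 1e-3)  # prints True (noiseless case)
```

Every step is an FFT or an elementwise operation, which is why FFT-based deconvolution runs in real time; the hard part, and the thesis's contribution, is keeping this speed while imposing a non-convex prior on the gradients.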
We develop multi-level preconditioners to solve discrete Poisson equations. Existing multilevel preconditioners have two major drawbacks: excessive bandwidth growth at coarse levels; and the inability to adapt to problems with highly varying coefficients. Our approach tackles both these problems by introducing sparsification and compensation steps at each level. We interleave the selection of fine and coarse-level variables with the removal of weak connections between potential fine-level variables (sparsification) and compensate for these changes by strengthening nearby connections. By applying these operations before each elimination step and repeating the procedure recursively on the resulting smaller systems, we obtain highly efficient schemes. The construction is linear in time and memory. Numerical experiments demonstrate that our new schemes outperform state of the art methods, both in terms of operation count and wall-clock time, over a range of 2D and 3D problems.
-
TR2013-958
2013
A Balancing Domain Decomposition By Constraints Deluxe Method For Numerically Thin Reissner-Mindlin Plates Approximated With Falk-tu Finite Elements
Lee, Jong Ho
Abstract
|
PDF
Title: A Balancing Domain Decomposition By Constraints Deluxe Method For Numerically Thin Reissner-Mindlin Plates Approximated With Falk-tu Finite Elements
Author(s): Lee, Jong Ho
Abstract:
The Reissner-Mindlin plate models thin plates. The condition numbers of finite element approximations of these plate models increase very rapidly as the thickness of the plate goes to 0. A Balancing Domain Decomposition by Constraints (BDDC) deluxe method is developed for these plate problems discretized by Falk-Tu finite elements. In this new algorithm, subdomain Schur complements restricted to individual edges are used to define the average operator for the BDDC deluxe method. It is established that the condition number of this preconditioned iterative method is bounded by C(1 + log (H/h))^2 if t, the thickness of the plate, is on the order of the element size h or smaller; H is the maximum diameter of the subdomains. The constant C is independent of the thickness t as well as H and h. Numerical results, which verify the theory, and a comparison with a traditional BDDC method are also provided.
-
TR2013-962
2013
Cryptographic Security of Macaroon Authorization Credentials
Lopez-Alt, Adriana
Abstract
|
PDF
Title: Cryptographic Security of Macaroon Authorization Credentials
Author(s): Lopez-Alt, Adriana
Abstract:
Macaroons, recently introduced by Birgisson et al., are authorization credentials that provide support for controlled sharing in decentralized systems. Macaroons are similar to cookies in that they are bearer credentials, but unlike cookies, macaroons include caveats that attenuate and contextually confine when, where, by whom, and for what purpose authorization should be granted.
In this work, we formally study the cryptographic security of macaroons. We define macaroon schemes, introduce corresponding security definitions, and provide several constructions. In particular, the MAC-based and certificate-based constructions outlined by Birgisson et al. can be seen as instantiations of our definitions. We also present a new construction that is privately verifiable (similar to the MAC-based construction) but where the verifying party does not learn the intermediate keys of the macaroon, a problem already observed by Birgisson et al.
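The MAC-based construction can be sketched with standard HMAC chaining, following Birgisson et al.: each new caveat is MACed under the previous signature, so any holder can attenuate a macaroon further, but only the minter (who knows the root key) can verify it. This is an illustrative rendering with invented field names, not the schemes formalized in this work:

```python
import hmac, hashlib

def _mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def mint(root_key: bytes, identifier: bytes):
    """Create a macaroon with no caveats; sig = HMAC(root_key, id)."""
    return {"id": identifier, "caveats": [], "sig": _mac(root_key, identifier)}

def add_caveat(m, caveat: bytes):
    """Attenuate: anyone holding m can do this, since only m['sig'] is needed."""
    return {"id": m["id"],
            "caveats": m["caveats"] + [caveat],
            "sig": _mac(m["sig"], caveat)}

def verify(root_key: bytes, m) -> bool:
    """Replay the chain from the root key and compare signatures."""
    sig = _mac(root_key, m["id"])
    for c in m["caveats"]:
        sig = _mac(sig, c)
    return hmac.compare_digest(sig, m["sig"])

m = mint(b"root-secret", b"user=alice")
m2 = add_caveat(m, b"time < 2026-01-01")
assert verify(b"root-secret", m2)
assert not verify(b"wrong-key", m2)
```

Note that removing a caveat invalidates the signature, since the chain of MACs cannot be reversed without the intermediate keys.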
We also formalize the notion of a protocol for "discharging" third-party caveats and present a security definition for such a protocol. The encryption-based protocol outlined by Birgisson et al. can be seen as an instantiation of our definition, and we also present a new signature-based construction.
Finally, we formally prove the security of all constructions in the given security models.
-
Ph.D. Thesis
2013
Relation Extraction with Weak Supervision and Distributional Semantics
Min, Bonan
Abstract
|
PDF
Title: Relation Extraction with Weak Supervision and Distributional Semantics
Candidate: Min, Bonan
Advisor(s): Grishman, Ralph
Abstract:
Relation Extraction aims at detecting and categorizing semantic relations between pairs of entities in unstructured text. It benefits an enormous number of applications such as Web search and Question Answering. Traditional approaches for relation extraction rely either on learning from a large number of accurate human-labeled examples or on pattern matching with hand-crafted rules. These resources are very laborious to obtain and can only be applied to a narrow set of target types of interest.
This thesis focuses on learning relations with little or no human supervision. First, we examine the approach that treats relation extraction as a supervised learning problem. We develop an algorithm that is able to train a model with approximately 1/3 of the human-annotation cost and that matches the performance of models trained with high-quality annotation. Second, we investigate distant supervision, a weakly supervised algorithm that automatically generates its own labeled training data. We develop a latent Bayesian framework for this purpose. By using a model which provides a better approximation of the weak source of supervision, it outperforms the state-of-the-art methods. Finally, we investigate the possibility of building all relational tables beforehand with an unsupervised relation extraction algorithm. We develop an effective yet efficient algorithm that combines the power of various semantic resources that are automatically mined from a corpus based on distributional semantics. The algorithm is able to extract a very large set of relations from the web at high precision.
-
TR2013-950
2013
Foundations of a Formal Theory of Time Travel
Morgenstern, Leora
Abstract
|
PDF
Title: Foundations of a Formal Theory of Time Travel
Author(s): Morgenstern, Leora
Abstract:
Although the phenomenon of time travel is common in popular culture, there has been little work in AI on developing a formal theory of time travel. This paper develops such a theory. The paper introduces a branching-time ontology that maintains the classical restriction of forward movement through a temporal tree structure, but permits the representation of paths in which one can perform inferences about time-travel scenarios. Central to the ontology is the notion of an agent embodiment whose beliefs are equivalent to those of an agent who has time-traveled from the future. We show how to formalize an example scenario and demonstrate what it means for such a scenario to be motivated with respect to an agent embodiment.
-
TR2013-951
2013
A BDDC Algorithm for Raviart-Thomas Vector Fields
Oh, Duk-Soon;
Widlund, Olof B.; Dohrmann, Clark R.
Abstract
|
PDF
Title: A BDDC Algorithm for Raviart-Thomas Vector Fields
Author(s): Oh, Duk-Soon; Widlund, Olof B.; Dohrmann, Clark R.
Abstract:
A BDDC preconditioner is defined by a coarse component, expressed in terms of primal constraints and a weighted average across the interface between the subdomains, and local components given in terms of Schur complements of local subdomain problems. A BDDC method for vector field problems discretized with Raviart-Thomas finite elements is introduced. Our method is based on a new type of weighted average developed to deal with more than one variable coefficient. A bound on the condition number of the preconditioned linear system is also provided; it is independent of the values and jumps of the coefficients across the interface and grows only polylogarithmically with the number of degrees of freedom of the individual subdomains. Numerical experiments for two- and three-dimensional problems are also presented, which support the theory and show the effectiveness of our algorithm even for certain problems not covered by our theory.
-
Ph.D. Thesis
2013
Usable Security Mechanisms in the Developing World
Paik, Michael
Abstract
|
PDF
Title: Usable Security Mechanisms in the Developing World
Candidate: Paik, Michael
Advisor(s): Subramanian, Lakshminarayanan
Abstract:
Security and privacy are increasingly important in our interconnected world. Cybercrimes, including identity theft, phishing, and other attacks, are on the rise, and computer-assisted crimes such as theft and stalking are becoming commonplace.
Contemporary with this trend is the uptake of technology in the developing world, proceeding at a pace often outstripping that of the developed world. Penetration of mobile phones and services such as healthcare delivery, mobile money, and social networking is higher than that of even amenities like electricity. Connectivity is empowering disenfranchised people, providing information and services to the heretofore disconnected poor.
There are efforts to use technology to enhance physical security and well-being in the developing world, including citizen journalism, education, improving drug security, attendance tracking, etc.
However, there are significant challenges to security in both the digital and the physical domains that are particular to these contexts. Infrastructure is constrained; literacy, numeracy, and familiarity with basic technologies cannot be assumed; and environments are harsh on hardware. These circumstances often prevent security best practices from being transplanted directly to these regions. In many ways, the adoption of technology has overtaken users' ability to use it safely, and their trust in it is oftentimes greater than it should be.
This dissertation describes several systems and methodologies designed to operate in the developing world, using technologies and metaphors that are familiar to users and that are robust against the operating environments.
It begins with an overview of the state of affairs, and several threat models. It continues with a description of Signet, a method to use SIM cards as trusted computing hardware to provide secure signed receipts. Next, Epothecary describes a low-infrastructure system for tracking pharmaceuticals that also significantly and asymmetrically increases costs for counterfeiters. The balance consists of a description of a low-cost Biometric Terminal currently in use by NGOs in India performing DOTS-based tuberculosis treatment, Blacknoise, an investigation into the use of low-cost cameraphones with noisy imaging sensors for image-based steganography, and finally Innoculous, a low-cost, crowdsourcing system for combating the spread of computer viruses, particularly among non-networked computers, while also collecting valuable "epidemiological" data.
-
TR2013-954
2013
Automating Separation Logic Using SMT
Piskac, Ruzica;
Wies, Thomas; Zufferey, Damien
Abstract
|
PDF
Title: Automating Separation Logic Using SMT
Author(s): Piskac, Ruzica; Wies, Thomas; Zufferey, Damien
Abstract:
Separation logic (SL) has gained widespread popularity because of its ability to succinctly express complex invariants of a program's heap configurations. Several specialized provers have been developed for decidable SL fragments. However, these provers cannot be easily extended or combined with solvers for other theories that are important in program verification, e.g., linear arithmetic. In this paper, we present a reduction of decidable SL fragments to a decidable first-order theory that fits well into the satisfiability modulo theories (SMT) framework. We show how to use this reduction to automate satisfiability, entailment, frame inference, and abduction problems for separation logic using SMT solvers. Our approach provides a simple method of integrating separation logic into existing verification tools that provide SMT backends, and an elegant way of combining SL fragments with other decidable first-order theories. We implemented this approach in a verification tool and applied it to heap-manipulating programs whose verification involves reasoning in theory combinations.
-
Ph.D. Thesis
2013
Inapproximability Reductions and Integrality Gaps
Popat, Preyas
Abstract
|
PDF
Title: Inapproximability Reductions and Integrality Gaps
Candidate: Popat, Preyas
Advisor(s): Khot, Subhash
Abstract:
In this thesis we prove intractability results for several well studied problems in combinatorial optimization.
Closest Vector Problem with Preprocessing (CVPP): We show that the preprocessing version of the well known Closest Vector Problem is hard to approximate to an almost polynomial factor unless NP is in quasi polynomial time. The approximability of CVPP is closely related to the security of lattice based cryptosystems.
Pricing Loss Leaders: We show hardness of approximation results for the problem of maximizing profit from buyers with single-minded valuations, where each buyer is interested in bundles of at most k items and the items are allowed to have negative prices ("loss leaders"). For k = 2, we show that, assuming the Unique Games Conjecture, it is hard to approximate the profit to any constant factor. For k > 2, we show the same result assuming P != NP.
Integrality gaps: We show Semidefinite Programming (SDP) integrality gaps for Unique Games and 2-to-1 Games. Inapproximability results for these problems imply inapproximability results for many fundamental optimization problems. For the first problem, we show "approximate" integrality gaps for super-constant rounds of the powerful Lasserre hierarchy. For the second problem, we show integrality gaps for the basic SDP relaxation with perfect completeness.
-
Ph.D. Thesis
2013
Natural Interaction with a Virtual World
Rosenberg, Ilya
Abstract
|
PDF
Title: Natural Interaction with a Virtual World
Candidate: Rosenberg, Ilya
Advisor(s): Perlin, Ken
Abstract:
A large portion of computer graphics and human/computer interaction is concerned with the creation, manipulation and use of two and three dimensional objects existing in a virtual world. By creating more natural physical interfaces and virtual worlds which behave in physically plausible ways, it is possible to empower nonexpert users to create, work and play in virtual environments. This thesis is concerned with the design, creation, and optimization of user-input devices which break down the barriers between the real and the virtual as well as the development of software algorithms which allow for the creation of physically realistic virtual worlds.
-
M.S. Thesis
2013
Parsing and Analyzing POSIX API behavior on different platforms
Savvides, Savvas
Abstract
|
PDF
Title: Parsing and Analyzing POSIX API behavior on different platforms
Candidate: Savvides, Savvas
Advisor(s): Cappos, Justin; Li, Jinyang
Abstract:
Because of the increased variety of operating systems and architectures, developing applications that are supported by multiple platforms has become a cumbersome task. To mitigate this problem, many portable APIs have been created that hide the details of the underlying layers of the system and provide a universal interface on top of which applications can be built. It is often necessary to examine the interactions between an application and an API, either to check that the behavior of these interactions is the expected one or to confirm that this behavior is the same across platforms. This thesis describes the POSIX Omni Tracer tool, which provides an easy way to analyze the behavior of the POSIX API on various platforms. The behavior of the POSIX API is captured in traces during an application's execution using various tracing utilities. These traces are then parsed into a uniform representation. Since the captured behavior from different platforms shares the same format, traces can easily be analyzed or compared with one another.
-
Ph.D. Thesis
2013
Security Mechanisms for Physical Authentication
Sharma, Ashlesh
Abstract
|
PDF
Title: Security Mechanisms for Physical Authentication
Candidate: Sharma, Ashlesh
Advisor(s): Subramanian, Lakshminarayanan
Abstract:
Counterfeiting of goods is a worldwide problem, with losses in the billions of dollars. It is estimated that 10% of all world trade is in counterfeit goods. To alleviate counterfeiting, a number of techniques are used, from barcodes to holograms. But these technologies are easily reproducible, and hence they are ineffective against counterfeiters.
In this thesis, we introduce PaperSpeckle, a novel way to fingerprint any piece of paper based on its unique microscopic properties. Next, we extend and generalize this work to introduce TextureSpeckle, a novel way to fingerprint and characterize the uniqueness of the surface of a material based on the interaction of light with the natural randomness present in the rough structure at the microscopic level of the surface. We show the existence and uniqueness of these fingerprints by analyzing a large number of surfaces (over 20,000 microscopic surfaces and 200 million pairwise comparisons) of different materials. We also define the entropy of the fingerprints and show how each surface can be uniquely identified in a robust manner even in case of damage.
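A common way to make this kind of physical fingerprint matching robust is to compare binarized fingerprints by fractional Hamming distance: re-imaged copies of the same surface differ in few bits, while independent surfaces differ in about half. The sketch below illustrates that idea only; the bit-vector representation, threshold, and noise level are assumptions for illustration, not the thesis's actual matching pipeline.

```python
import random

def fractional_hamming(a, b):
    """Fraction of differing bits between two equal-length bit sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def same_surface(a, b, threshold=0.25):
    # Same surface re-imaged (even with some damage): small fraction of
    # flipped bits. Independent surfaces: roughly half the bits differ.
    return fractional_hamming(a, b) < threshold

random.seed(0)
fp = [random.randint(0, 1) for _ in range(256)]            # enrolled fingerprint
noisy = [b ^ (random.random() < 0.05) for b in fp]         # re-imaged, ~5% noise
other = [random.randint(0, 1) for _ in range(256)]         # a different surface
assert same_surface(fp, noisy)
assert not same_surface(fp, other)
```

The gap between the "same surface" and "different surface" distance distributions is what the entropy analysis mentioned above would quantify.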
From a theoretical perspective, we consider a discrete approximation model from light scattering theory which allows us to compute the speckle pattern for a given surface. Under this computational model, we show that given a speckle pattern, it is computationally hard to reconstruct the physical surface characteristics by simulating the multiple scattering of light. Using TextureSpeckle as a security primitive, we design secure protocols to enable a variety of scenarios such as: i) supply chain security, where applications range from drug tracking to inventory management, ii) mobile based secure transfer of money (mobile money), where any paper can be changed to an on-demand currency, and iii) fingerprint ecosystem, a cloud based system, where any physical object can be identified and authenticated on-demand.
We discuss the construction of the prototype device, ranging from optical lens design to usability aspects, and show how our technique can be applied in the real world to alleviate counterfeiting and forgery. In addition, we introduce Pattern Matching Puzzles (PMPs), a usable security mechanism that provides a 'human computable' one-time MAC (message authentication code) for every transaction, making each transaction information-theoretically secure against various adversarial attacks. The puzzles are easy to solve even for semi-literate users with simple pattern recognition skills.
-
TR2013-953
2013
Online Machine Learning Algorithms For Currency Exchange Prediction
Soulas, Eleftherios;
Shasha, Dennis
Abstract
|
PDF
Title: Online Machine Learning Algorithms For Currency Exchange Prediction
Author(s): Soulas, Eleftherios; Shasha, Dennis
Abstract:
Using machine learning algorithms to analyze and predict security price patterns is an area of active interest. Most practical stock traders combine computational tools with their intuitions and knowledge to make decisions.
This technical report describes methods for two problems:
1. How to find highly correlated pairs of securities over the most recent time period (e.g., over the last hour) in a sliding-window fashion. The base model used for this is StatStream.
2. How to predict foreign exchange rate changes in an online fashion, updated over time.
This document explains the algorithms and discusses various metrics of accuracy. It validates the models by applying the model to a real-life trading price stream. Though it is very hard to replace the expertise of an experienced trader, software like this may enhance the trader's performance.
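As a sketch of the quantity maintained in the first problem, the class below keeps running sums to report the Pearson correlation of two streams over a sliding window. StatStream itself scales to thousands of streams using DFT-based sketches over basic windows; this direct incremental computation is only illustrative.

```python
from collections import deque
import math

class RollingCorrelation:
    """Pearson correlation of two streams over the last `window` points,
    maintained incrementally in O(1) per update."""
    def __init__(self, window: int):
        self.w = window
        self.xs, self.ys = deque(), deque()
        self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def update(self, x: float, y: float):
        self.xs.append(x); self.ys.append(y)
        self.sx += x; self.sy += y
        self.sxx += x * x; self.syy += y * y; self.sxy += x * y
        if len(self.xs) > self.w:                 # evict the oldest point
            ox, oy = self.xs.popleft(), self.ys.popleft()
            self.sx -= ox; self.sy -= oy
            self.sxx -= ox * ox; self.syy -= oy * oy; self.sxy -= ox * oy

    def corr(self) -> float:
        n = len(self.xs)
        cov = self.sxy - self.sx * self.sy / n
        vx = self.sxx - self.sx ** 2 / n
        vy = self.syy - self.sy ** 2 / n
        return cov / math.sqrt(vx * vy)

rc = RollingCorrelation(window=4)
for t in range(8):
    rc.update(float(t), 2.0 * t + 1.0)   # perfectly linearly related streams
assert abs(rc.corr() - 1.0) < 1e-9
```

For long-running streams, the subtraction-based sums can accumulate floating-point drift, which is one reason systems like StatStream recompute over basic windows instead.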
-
Ph.D. Thesis
2013
Augmenting Information Flow for Visual Privacy
Spiro, Ian
Abstract
|
PDF
Title: Augmenting Information Flow for Visual Privacy
Candidate: Spiro, Ian
Advisor(s): Bregler, Christopher
Abstract:
In the Information Age, visual media take on powerful new forms. Photographs once printed on paper and stored in physical albums now exist as digital files. With the rise of social media, photo data has moved to the cloud for rapid dissemination. The upside can be measured in terms of increased efficiency, greater reach, or reduced printing costs. But there is a downside that is harder to quantify: the risk of private photos or videos leaking inappropriately. Human imagery is potentially sensitive, revealing private details of a person's body, lifestyle, activities, and more. Images create visceral responses and have the potential to permanently damage a person's reputation.
We employed the theory of contextual integrity to explore privacy aspects of transmitting the human form. In response to privacy threats from new sociotechnical systems, we developed practical solutions that have the potential to restore balance. The main work is a set of client-side, technical interventions that can be used to alter information flows and provide features to support visual privacy. In the first approach, we use crowdsourcing to extract specific, useful human signal from video to decouple it from bundled identity information. The second approach is an attempt to achieve similar ends with pure software. Instead of using information workers, we developed a series of filters that alter video to hide identity information while still revealing motion signal. The final approach is an attempt to control the recipients of photos by encoding them in the visual channel. The software completely protects data from third parties who lack proper credentials and maintains data integrity by exploiting the visual coherence of uploaded images, even in the face of JPEG compression. The software offers end-to-end encryption that is compatible with existing social media applications.
-
Ph.D. Thesis
2013
Toward a computational solution to the inverse problem of how hypoxia arises in metabolically heterogeneous cancer cell populations
Sundstrom, Andrew
Abstract
|
PDF
Title: Toward a computational solution to the inverse problem of how hypoxia arises in metabolically heterogeneous cancer cell populations
Candidate: Sundstrom, Andrew
Advisor(s): Mishra, Bud; Bar-Sagi, Dafna
Abstract:
As a tumor grows, it rapidly outstrips its blood supply, leaving portions of tumor that undergo hypoxia. Hypoxia is strongly correlated with poor prognosis as it renders tumors less responsive to chemotherapy and radiotherapy. During hypoxia, hypoxia-inducible factors (HIFs) upregulate production of glycolysis enzymes and VEGF, thereby promoting metabolic heterogeneity and angiogenesis, and proving to be directly instrumental in tumor progression. Prolonged hypoxia leads to necrosis, which in turn activates inflammatory responses that produce cytokines that stimulate tumor growth. Hypoxic tumor cells interact with macrophages and fibroblasts, both involved with inflammatory processes tied to tumor progression. So it is of clinical and theoretical significance to understand: Under what conditions does hypoxia arise in a heterogeneous cell population? Our aim is to transform this biological origins problem into a computational inverse problem, and then attack it using approaches from computer science. First, we develop a minimal, stochastic, spatiotemporal simulation of large heterogeneous cell populations interacting in three dimensions. The simulation can manifest stable localized regions of hypoxia. Second, we employ and develop a variety of algorithms to analyze histological images of hypoxia in xenografted colorectal tumors, and extract features to construct a spatiotemporal logical characterization of hypoxia. We also consider characterizing hypoxia by a linear regression functional learning mechanism that yields a similarity score. Third, we employ a Bayesian statistical model checking algorithm that can determine, over some bounded number of simulation executions, whether hypoxia is likely to emerge under some fixed set of simulation parameters, and some fixed logical or functional description of hypoxia.
Driving the model checking process is one of three adaptive Monte Carlo sampling algorithms we developed to explore the high dimensional space of simulation initial conditions and operational parameters. Taken together, these three system components formulate a novel approach to the inverse problem above, and constitute a design for a tool that can be placed into the hands of experimentalists, for testing hypotheses based upon known parameter values or ones the tool might discover. In principle, this design can be generalized to other biological phenomena involving large heterogeneous populations of interacting cells.
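The Bayesian statistical model-checking step described above can be sketched as follows: run the stochastic simulation repeatedly at a fixed parameter setting and maintain a Beta posterior over the probability p that hypoxia emerges, stopping once the posterior mass for "p > theta" (or its complement) exceeds a confidence level. The simulator here is a mock, and the uniform prior, threshold, and stopping rule are illustrative placeholders, not the thesis's actual components.

```python
import random
from math import log, exp

def posterior_prob_p_gt(theta, s, f, grid=4000):
    """P(p > theta | s successes, f failures) under a uniform prior, by
    numerical integration of the Beta(s+1, f+1) density (in log space
    to avoid underflow for large s, f)."""
    ps = [(i + 0.5) / grid for i in range(grid)]
    logs = [s * log(p) + f * log(1 - p) for p in ps]
    m = max(logs)
    ws = [exp(l - m) for l in logs]
    tail = sum(w for p, w in zip(ps, ws) if p > theta)
    return tail / sum(ws)

def check(simulate, theta=0.5, conf=0.99, max_runs=500):
    """Sequentially run the simulation until the posterior decides whether
    the emergence probability exceeds theta, or the run budget is spent."""
    s = f = 0
    for _ in range(max_runs):
        if simulate():
            s += 1
        else:
            f += 1
        t = posterior_prob_p_gt(theta, s, f)
        if t > conf:
            return True        # hypoxia emerges with probability > theta
        if 1 - t > conf:
            return False
    return None                # undecided within the run budget

random.seed(1)
hypoxia_sim = lambda: random.random() < 0.8   # mock simulator, true p = 0.8
assert check(hypoxia_sim, theta=0.5) is True
```

The sequential stopping rule is what makes this practical: far fewer simulation runs are needed when the true probability is far from the threshold.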
-
M.S. Thesis
2013
PhyloBrowser: A visual tool to explore phylogenetic trees
Tershakovec, Tamara
Abstract
|
PDF
Title: PhyloBrowser: A visual tool to explore phylogenetic trees
Candidate: Tershakovec, Tamara
Advisor(s): Shasha, Dennis; Coruzzi, Gloria
Abstract:
Primary acknowledgements go to my research advisor, Dennis Shasha, for his patient and unwavering support. I would also like to thank my second reader, Gloria Coruzzi, for encouraging and showcasing my work. Kranthi Varala helped immensely in explaining the biological and statistical concepts involved in this project. The Virtual Plant team got me started in biological data visualization and was a joy to work with. And finally, thanks to Chris Poultney, who put me in touch with Professor Shasha and started me on my way back to NYU.
-
Ph.D. Thesis
2013
Rethinking Information Privacy for the Web
Tierney, Matthew
Abstract
|
PDF
Title: Rethinking Information Privacy for the Web
Candidate: Tierney, Matthew
Advisor(s): Subramanian, Lakshminarayanan
Abstract:
In response to Supreme Court Justice Samuel Alito's opinion that society should accept a decline in personal privacy with modern technology, Hanni M. Fakhoury, staff attorney with the Electronic Frontier Foundation, argued "Technology doesn't involve an 'inevitable' tradeoff [of increased convenience] with privacy. The only inevitability must be the demand that privacy be a value built into our technology" [42]. Our position resonates with Mr. Fakhoury's. In this thesis, we present three artifacts that address the balance between usability, efficiency, and privacy as we rethink information privacy for the web.
In the first part of this thesis, we present the design, implementation and evaluation of Cryptagram, a system designed to enhance online photo privacy. Cryptagram enables users to convert photos into encrypted images, which the users upload to Online Social Networks (OSNs). Users directly manage access control to those photos via shared keys that are independent of OSNs or other third parties. OSNs apply standard image transformations (JPEG compression) to all uploaded images so Cryptagram provides image encoding and encryption protocols that are tolerant to these transformations. Cryptagram guarantees that the recipient with the right credentials can completely retrieve the original image from the transformed version of the uploaded encrypted image while the OSN cannot infer the original image. Cryptagram's browser extension integrates seamlessly with preexisting OSNs, including Facebook and Google+, and currently has over 400 active users.
In the second part of this thesis, we present the design and implementation of Lockbox, a system designed to provide end-to-end private file-sharing with the convenience of Google Drive or Dropbox. Lockbox uniquely combines two important design points: (1) a federated system for detecting and recovering from server equivocation and (2) a hybrid cryptosystem over delta encoded data to balance storage and bandwidth costs with efficiency for syncing end-user data. To facilitate appropriate use of public keys in the hybrid cryptosystem, we integrate a service that we call KeyNet, which is a web service designed to leverage existing authentication media (e.g., OAuth, verified email addresses) to improve the usability of public key cryptography.
In the third part of this thesis, we present the design of Compass, which realizes the philosophical privacy framework of contextual integrity (CI), which we believe better captures users' privacy expectations in OSNs, as a full OSN design. In Compass, three properties hold: (a) users are associated with roles in specific contexts; (b) every piece of information posted by a user is associated with a specific context; (c) norms defined on roles and attributes of posts in a context govern how information is shared across users within that context. Given the definition of a context and its corresponding norm set, we describe the design of a compiler that converts the human-readable norm definitions to generate appropriate information flow verification logic including: (a) a compact binary decision diagram for the norm set; and (b) access control code that evaluates how a new post to a context will flow. We have implemented a prototype that shows how the philosophical framework of contextual integrity can be realized in practice to achieve strong privacy guarantees with limited additional verification overhead.
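The three properties above can be reduced to a small executable illustration: a context assigns roles to users, every post carries an attribute, and a norm table over (sender role, receiver role, attribute) decides whether information may flow. The names and the table-lookup representation are invented for this sketch; Compass's compiler emits binary decision diagrams and access-control code rather than evaluating predicates directly.

```python
# A norm maps (sender_role, receiver_role, attribute) -> allowed.
NORMS = {
    ("patient", "doctor", "medical"): True,
    ("patient", "employer", "medical"): False,
    ("member", "member", "photo"): True,
}

def may_flow(context_roles, sender, receiver, attribute):
    """Decide whether a post with `attribute` may flow from sender to
    receiver, given each user's role in this context."""
    key = (context_roles[sender], context_roles[receiver], attribute)
    return NORMS.get(key, False)   # default-deny for unlisted flows

# A health context: the same users could hold different roles elsewhere.
health_ctx = {"alice": "patient", "bob": "doctor", "carol": "employer"}
assert may_flow(health_ctx, "alice", "bob", "medical")
assert not may_flow(health_ctx, "alice", "carol", "medical")
```

Binding roles to contexts is the key move: the decision depends on who the users are *in this context*, not on a single global friend graph.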
-
M.S. Thesis
2013
PAC-Learning for Energy-based Models
Zhang, Xiang
Abstract
|
PDF
Title: PAC-Learning for Energy-based Models
Candidate: Zhang, Xiang
Advisor(s): LeCun, Yann; Sontag, David
Abstract:
In this thesis we prove that probably approximately correct (PAC) learning is guaranteed for the framework of energy-based models. Starting from basic inequalities, we develop our theory based on the existence of a metric between hypotheses, with respect to which the energy function is Lipschitz continuous. The theory yields a new regularization scheme, called central regularization, which puts deep learning and feature learning in a new perspective. Experiments with this scheme show that it achieves both good generalization error and good test error.
-
TR2013-961
2013
Transaction chains: achieving serializability with low latency in geo-distributed storage systems
Zhang, Yang;
Power, Russell; Zhou, Siyuan; Sovran, Yair; Aguilera, Marcos K.; Li, Jinyang
Abstract
|
PDF
Title: Transaction chains: achieving serializability with low latency in geo-distributed storage systems
Author(s): Zhang, Yang; Power, Russell; Zhou, Siyuan; Sovran, Yair; Aguilera, Marcos K.; Li, Jinyang
Abstract:
Currently, users of geo-distributed storage systems face a hard choice between serializable transactions with high latency and limited or no transactions with low latency. We show that it is possible to obtain both serializable transactions and low latency, under two conditions. First, transactions are known ahead of time, permitting an a priori static analysis of conflicts. Second, transactions are structured as transaction chains consisting of a sequence of hops, each hop modifying data at one server. To demonstrate this idea, we built Lynx, a geo-distributed storage system that offers transaction chains, secondary indexes, materialized join views, and geo-replication. Lynx uses static analysis to determine if each hop can execute separately while preserving serializability. If so, a client need wait only for the first hop to complete, which occurs quickly. To evaluate Lynx, we built three applications: an auction service, a Twitter-like microblogging site and a social networking site. These applications successfully use chains to achieve low-latency operation and good throughput.
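The execution pattern described above, in toy form: the client blocks only on the first hop of a chain, while the remaining hops are applied asynchronously, in order. The in-memory "servers", the queue, and the API below are invented for illustration and ignore replication, failures, and concurrency control; they only show the latency structure that the static analysis makes safe.

```python
from collections import deque

servers = {"us": {}, "eu": {}, "asia": {}}   # toy stand-ins for datacenters
pending = deque()                            # hops queued for async execution

def apply_hop(hop):
    server, key, fn = hop
    store = servers[server]
    store[key] = fn(store.get(key))

def submit_chain(hops):
    """Each hop is (server, key, update_fn). Execute the first hop
    synchronously; queue the rest for in-order background execution."""
    first, *rest = hops
    apply_hop(first)          # the client waits only for this hop
    pending.extend(rest)

def drain():                  # background propagation of queued hops
    while pending:
        apply_hop(pending.popleft())

# Chain: record a sale in "us", then update a secondary index in "eu".
submit_chain([
    ("us", "order:1", lambda _: "alice->widget"),
    ("eu", "idx:alice", lambda v: (v or []) + ["order:1"]),
])
assert servers["us"]["order:1"] == "alice->widget"   # visible immediately
drain()
assert servers["eu"]["idx:alice"] == ["order:1"]     # visible after propagation
```

The static analysis is what justifies acknowledging after the first hop: it must show that deferring the later hops cannot produce a non-serializable interleaving with concurrent chains.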
-
TR2012-949
2012
A Note on the Complexity of Model-Checking Bounded Multi-Pushdown Systems
Bansal, Kshitij;
Demri, Stephane
Abstract
|
PDF
Title: A Note on the Complexity of Model-Checking Bounded Multi-Pushdown Systems
Author(s): Bansal, Kshitij; Demri, Stephane
Abstract:
In this note, we provide complexity characterizations of model checking multi-pushdown systems. Multi-pushdown systems model recursive concurrent programs in which any sequential process has a finite control. We consider three standard notions for boundedness: context boundedness, phase boundedness and stack ordering. The logical formalism is a linear-time temporal logic extending the well-known logic CaRet but dedicated to multi-pushdown systems, in which abstract operators (related to calls and returns) such as those for next-time and until are parameterized by stacks. We show that the problem is EXPTIME-complete for context-bounded runs and unary encoding of the number of context switches; we also prove that the problem is 2EXPTIME-complete for phase-bounded runs and unary encoding of the number of phase switches. In both cases, the value k is given as an input (hence it is not a constant of the model-checking problem), which makes a substantial difference in the complexity. In certain cases, our results improve previous complexity results.
-
Ph.D. Thesis
2012
Learning Hierarchical Feature Extractors For Image Recognition
Boureau, Y-Lan
Abstract
|
PDF
Title: Learning Hierarchical Feature Extractors For Image Recognition
Candidate: Boureau, Y-Lan
Advisor(s): LeCun, Yann
Abstract:
Telling a cow from a sheep is effortless for most animals, but requires much engineering for computers. In this thesis, we seek to tease out basic principles that underlie many recent advances in image recognition. First, we recast many methods into a common unsupervised feature extraction framework based on an alternation of coding steps, which encode the input by comparing it with a collection of reference patterns, and pooling steps, which compute an aggregation statistic summarizing the codes within some region of interest of the image.
Within that framework, we conduct extensive comparative evaluations of many coding or pooling operators proposed in the literature. Our results demonstrate a robust superiority of sparse coding (which decomposes an input as a linear combination of a few visual words) and max pooling (which summarizes a set of inputs by their maximum value). We also propose macrofeatures, which import into the popular spatial pyramid framework the joint encoding of nearby features commonly practiced in neural networks, and obtain significantly improved image recognition performance. Next, we analyze the statistical properties of max pooling that underlie its better performance, through a simple theoretical model of feature activation. We then present results of experiments that confirm many predictions of the model. Beyond the pooling operator itself, an important parameter is the set of pools over which the summary statistic is computed. We propose locality in feature configuration space as a natural criterion for devising better pools. Finally, we propose ways to make coding faster and more powerful through fast convolutional feedforward architectures, and examine how to incorporate supervision into feature extraction schemes. Overall, our experiments offer insights into what makes current systems work so well, and state-of-the-art results on several image recognition benchmarks.
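The coding-then-pooling framework can be made concrete with the simplest choices: hard assignment of each local feature to its nearest codeword, followed by max pooling over a region. The toy codebook below is an assumption for illustration; the thesis's experiments use richer coders such as sparse coding and pools organized in spatial pyramids.

```python
def encode(x, codebook):
    """Hard-assignment code: 1 at the nearest codeword (Euclidean distance),
    0 elsewhere. Sparse coding would instead solve for a few nonzero weights."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in codebook]
    k = dists.index(min(dists))
    return [1.0 if i == k else 0.0 for i in range(len(codebook))]

def max_pool(codes):
    """Component-wise maximum over all codes within a region of interest."""
    return [max(col) for col in zip(*codes)]

codebook = [(0.0, 0.0), (1.0, 1.0)]               # two reference patterns
features = [(0.1, -0.2), (0.9, 1.1), (0.0, 0.1)]  # local features in a region
pooled = max_pool([encode(f, codebook) for f in features])
assert pooled == [1.0, 1.0]   # each codeword is activated somewhere in the region
```

Max pooling records only whether each codeword fired anywhere in the pool, which is the invariance-to-position property the statistical analysis in the thesis examines.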
-
Ph.D. Thesis
2012
On populations, haplotypes and genome sequencing
Franquin, Pierre
Abstract
|
PDF
Title: On populations, haplotypes and genome sequencing
Candidate: Franquin, Pierre
Advisor(s): Mishra, Bud
Abstract:
Population genetics has seen renewed interest since the completion of the human genome project. With the availability of rapidly growing volumes of genomic data, the scientific and medical communities have been optimistic that a better understanding of human diseases, as well as their treatment, was imminent. Many population genomic models and association studies have been designed (or redesigned) to address these problems. For instance, genome-wide association studies (GWAS) raised hopes for finding disease markers, personalized medicine, and rational drug design. Yet, as of today, they have not yielded results that live up to their promise and have led largely to frustrating disappointment.
Intrigued, but not deterred, by these challenges, this dissertation visits the different aspects of these problems. In the first part, we review the different models and theories of population genetics that are now challenged, and propose our own implementation of a model to test different hypotheses. This effort will hopefully help us understand whether our expectations were unreasonably high or whether we had ignored a crucial piece of information. When discussing association studies, we must not forget that we rely on data produced by the sequencing technologies available so far, and we have to ensure that the quality of these data is good enough for GWAS. Unfortunately, as we will see in the second part, despite the existence of a diverse set of sequencing technologies, none of them can produce phased haplotypes, which appear to be the most important type of sequence data needed for association studies. To address this challenge, we propose a novel approach to sequencing, called SMASH, that allows us to create the quality and type of haplotypic genome sequences necessary for efficient population genetics.
-
Ph.D. Thesis
2012
Optimizing Machine Translation by Learning to Search
Galron, Daniel
Abstract
|
PDF
Title: Optimizing Machine Translation by Learning to Search
Candidate: Galron, Daniel
Advisor(s): Melamed, Dan
Abstract:
We present a novel approach to training discriminative tree-structured machine translation systems by learning to search. We describe three primary innovations in this work: a new parsing coordinator architecture and algorithms to synthesize the required training examples for the learning algorithm; a new semiring that provides an unbiased way to compare translations; and a new training objective that measures whether a translation inference improves the quality of a translation. We also apply the reinforcement learning concept of exploration to SMT. Finally, we empirically evaluate the effects of our innovations on the quality of translations output by our system.
-
Ph.D. Thesis
2012
Flexible-Cost SLAM
Grimes, Matthew
Abstract
|
PDF
Title: Flexible-Cost SLAM
Candidate: Grimes, Matthew
Advisor(s): LeCun, Yann
Abstract:
The ability of a robot to track its position and its surroundings is critical in mobile robotics applications, such as autonomous transport, farming, search-and-rescue, and planetary exploration.
As a foundational building block to such tasks, localization must remain reliable and unobtrusive. For example, it must not provide an unneeded level of precision, when the cost of doing so displaces higher-level tasks from a busy CPU. Nor should it produce noisy estimates on the cheap, when there are CPU cycles to spare.
This thesis explores localization solutions that provide exactly the amount of accuracy needed to a given task. We begin with a real-world system used in the DARPA Learning Applied to Ground Robotics (LAGR) competition. Using a novel hybrid of wheel and visual odometry, we cut the cost of visual odometry from 100% of a CPU to 5%, clearing room for other critical visual processes, such as long-range terrain classification. We present our hybrid odometer in chapter 2.
Next, we describe a novel SLAM algorithm that provides a means to choose the desired balance between cost and accuracy. At its fastest setting, our algorithm converges faster than previous stochastic SLAM solvers, while maintaining significantly better accuracy. At its most accurate, it provides the same solution as exact SLAM solvers. Its main feature, however, is the ability to flexibly choose any point between these two extremes of speed and precision, as circumstances demand. As a result, we are able to guarantee real-time performance at each timestep on city-scale maps with large loops. We present this solver in chapter 3, along with results from both commonly available datasets and Google Street View data.
Taken as a whole, this thesis recognizes that precision and efficiency can be competing values, whose proper balance depends on the application and its fluctuating circumstances. It demonstrates how a localizer can and should fit its cost to the task at hand, rather than the other way around. In enabling this flexibility, we demonstrate a new direction for SLAM research, as well as provide a new convenience for end-users, who may wish to map the world without stopping it.
-
Ph.D. Thesis
2012
SMT Beyond DPLL(T): A New Approach to Theory Solvers and Theory Combination
Jovanovic, Dejan
Abstract
|
PDF
Title: SMT Beyond DPLL(T): A New Approach to Theory Solvers and Theory Combination
Candidate: Jovanovic, Dejan
Advisor(s): Barrett, Clark
Abstract:
Satisfiability modulo theories (SMT) is the problem of deciding whether a given logical formula can be satisfied with respect to a combination of background theories. The past few decades have seen many significant developments in the field, including fast Boolean satisfiability (SAT) solvers, efficient decision procedures for a growing number of expressive theories, and frameworks for the modular combination of decision procedures. All these improvements, together with robust SMT solver implementations, culminated in the acceptance of SMT as a standard tool in the fields of automated reasoning and computer-aided verification. In this thesis we develop new decision procedures for the theory of linear integer arithmetic and the theory of non-linear real arithmetic, and a new general framework for the combination of decision procedures. The new decision procedures integrate theory-specific reasoning with the Boolean search to provide more powerful and efficient procedures, and allow a more expressive language for explaining problematic states. The new framework for combining decision procedures overcomes the complexity limitations and restrictions on the theories imposed by the standard Nelson-Oppen approach.
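To make the problem statement concrete, here is a hedged toy "theory solver" that decides conjunctions of linear integer constraints by brute force over a small bounded domain; the procedures in the thesis are vastly more sophisticated, and this sketch only illustrates what "deciding satisfiability modulo a theory" means:

```python
from itertools import product

def satisfiable(constraints, variables, lo=-5, hi=5):
    """Return a satisfying integer assignment dict, or None.

    Brute-force search over a bounded box; purely illustrative.
    """
    for values in product(range(lo, hi + 1), repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

# x + y = 3  and  x - y = 1  over the integers
model = satisfiable(
    [lambda a: a["x"] + a["y"] == 3, lambda a: a["x"] - a["y"] == 1],
    ["x", "y"],
)
print(model)  # {'x': 2, 'y': 1}
```

A real linear-integer procedure replaces this enumeration with algebraic reasoning, and an SMT solver additionally interleaves it with the Boolean search over the formula's propositional structure.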
-
Ph.D. Thesis
2012
An Adaptive Fast Multipole Method-Based PDE Solver in Three Dimensions
Langston, Matthew Harper
Abstract
|
PDF
Title: An Adaptive Fast Multipole Method-Based PDE Solver in Three Dimensions
Candidate: Langston, Matthew Harper
Advisor(s): Zorin, Denis
Abstract:
Many problems in scientific computing require the accurate and fast solution to a variety of elliptic PDEs. These problems become increasingly difficult in three dimensions when forces become non-homogeneously distributed and geometries are complex.
We present an adaptive fast volume solver using a new version of the fast multipole method, incorporated with a pre-existing boundary integral formulation for the development of an adaptive embedded boundary solver.
For the fast volume solver portion of the algorithm, we present a kernel-independent, adaptive fast multipole method of arbitrary order accuracy for solving elliptic PDEs in three dimensions with radiation boundary conditions. The algorithm requires only a Green's function evaluation routine for the governing equation and a representation of the source distribution (the right-hand side) that can be evaluated at arbitrary points.
The performance of the method is accelerated in two ways. First, we construct a piecewise polynomial approximation of the right-hand side and compute far-field expansions in the FMM from the coefficients of this approximation. Second, we precompute tables of quadratures to handle the near-field interactions on adaptive octree data structures, keeping the total storage requirements in check through the exploitation of symmetries. We additionally show how we extend the free-space volume solver to solvers with periodic as well as Dirichlet boundary conditions.
For incorporation with the boundary integral solver, we develop interpolation methods to maintain the accuracy of the volume solver. These methods use the existing FMM-based octree structure to locate appropriate interpolation points, building polynomial approximations to this larger set of forces and evaluating these polynomials on the locally under-refined grid in the area of interest.
We present numerical examples for the Laplace, modified Helmholtz, and Stokes equations for a variety of boundary conditions and geometries, as well as studies of the interpolation procedures and of the stability of far-field and polynomial constructions.
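The adaptive octree underlying both the near-field tables and the interpolation step can be sketched in a few lines. Below is a hedged toy version (invented point data and parameter names, not the thesis code) that subdivides any box holding more than `cap` source points:

```python
# Toy adaptive octree refinement, as used to organize sources for an
# FMM-style solver: split any box with more than `cap` points.
# A box is (center, half_width); all values here are illustrative.

def refine(points, center, half, cap, depth=0, max_depth=8):
    """Return leaf boxes as (center, half, points_in_box) tuples."""
    if len(points) <= cap or depth == max_depth:
        return [(center, half, points)]
    leaves = []
    h = half / 2.0
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                c = (center[0] + dx, center[1] + dy, center[2] + dz)
                inside = [p for p in points
                          if all(abs(p[i] - c[i]) <= h for i in range(3))]
                if inside:
                    leaves += refine(inside, c, h, cap, depth + 1, max_depth)
    return leaves

pts = [(0.1, 0.1, 0.1), (0.12, 0.11, 0.09), (0.8, 0.8, 0.8)]
leaves = refine(pts, center=(0.5, 0.5, 0.5), half=0.5, cap=1)
print(len(leaves))
```

Because the two clustered points force repeated subdivision while the isolated point does not, the tree refines only where sources are dense, which is the property the non-homogeneous source distributions in the abstract require.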
-
Ph.D. Thesis
2012
Acquiring information from wider scope to improve event extraction
Liao, Shasha
Abstract
|
PDF
Title: Acquiring information from wider scope to improve event extraction
Candidate: Liao, Shasha
Advisor(s): Grishman, Ralph
Abstract:
Event extraction is a particularly challenging type of information extraction (IE). Most current event extraction systems rely on local information at the phrase or sentence level. However, this local context may be insufficient to resolve ambiguities in identifying particular types of events; information from a wider scope can serve to resolve some of these ambiguities.
In this thesis, we first investigate how to extract supervised and unsupervised features to improve a supervised baseline system. Then, we present two additional tasks to show the benefit of wider scope features in semi-supervised learning (self-training) and active learning (co-testing). Experiments show that using features from wider scope can not only aid a supervised local event extraction baseline system, but also help the semi-supervised or active learning approach.
-
M.S. Thesis
2012
A tool for extracting and indexing spatio-temporal information from biographical articles in Wikipedia
Morton-Owens, Emily
Abstract
|
PDF
Title: A tool for extracting and indexing spatio-temporal information from biographical articles in Wikipedia
Candidate: Morton-Owens, Emily
Advisor(s): Davis, Ernest
Abstract:
The Kivrin program, consisting of a crawler, a data collection, and a front-end interface, attempts to extract biographical information from Wikipedia, specifically, spatio-temporal information--who was where when--and make it easily searchable. Some of the considerations standard to moving object databases do not apply in this context, because the texts by their nature discuss a discontinuous series of notable moments. The paper discusses different methods of arranging the crawler queue priority to find more important figures and of disambiguating locations when the same place name (toponym) is shared among several places. When lifespan information is not available, it is estimated to exclude sightings outside the person's plausible lifetime.
The results are grouped by the number of sightings in the user's search range to minimize the visibility of false drops when they occur. Erroneous results are more visible in times and places where fewer legitimate sightings are recorded; the data is skewed, like Wikipedia itself, towards the U.S. and Western Europe and relatively recent history. The system could be most improved by using statistical methods to predict which terms are more likely personal names than place names and to identify verbs that precede location information rather than personal names. It could also be improved by incorporating the times as a third dimension in the geospatial index, which would allow "near" queries to include that dimension rather than a strict range.
The program can be used at http://linserv1.cims.nyu.edu:48866/cgi-bin/index.cgi
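The lifespan heuristic described above is easy to state as code. This is a hedged sketch, not Kivrin's implementation: the 100-year bound and the fallback of estimating the birth year from the earliest sighting are illustrative assumptions.

```python
# Toy lifespan filter: drop sightings outside a plausible lifetime.
# MAX_LIFESPAN and the birth-year fallback are illustrative choices.

MAX_LIFESPAN = 100

def plausible_sightings(sightings, birth_year=None):
    """Keep (place, year) sightings within the person's lifetime.

    If birth_year is unknown, estimate it as the earliest sighting year.
    """
    if not sightings:
        return []
    if birth_year is None:
        birth_year = min(year for _, year in sightings)
    return [(place, year) for place, year in sightings
            if birth_year <= year <= birth_year + MAX_LIFESPAN]

sightings = [("Paris", 1820), ("London", 1855), ("New York", 1999)]
print(plausible_sightings(sightings))  # the 1999 sighting is excluded
```

A filter of this kind removes exactly the kind of false drop the abstract describes: a namesake or place-name collision recorded far outside the subject's plausible lifetime.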
-
Ph.D. Thesis
2012
Mobile Accessibility Tools for the Visually Impaired
Paisios, Nektarios
Abstract
|
PDF
Title: Mobile Accessibility Tools for the Visually Impaired
Candidate: Paisios, Nektarios
Advisor(s): Subramanian, Lakshminarayanan
Abstract:
Visually impaired users are in dire need of better accessibility tools. The past few years have witnessed an exponential growth in the computing capabilities and onboard sensing capabilities of mobile phones making them an ideal candidate for building next-generation applications. We believe that the mobile device can play a significant role in the future for aiding visually impaired users in day-to-day activities with simple and usable mobile accessibility tools. This thesis describes the design, implementation, evaluation and user-study based analysis of four different mobile accessibility applications.
Our first system is a highly accurate and usable mobile navigational guide that uses Wi-Fi and accelerometer sensors to navigate unfamiliar environments. A visually impaired user can use the system to construct a virtual topological map across points of interest within a building by correlating the user's walking patterns (with turn signals) with the Wi-Fi and accelerometer readings. The user can subsequently use the map to navigate previously traveled routes. Our second system, Mobile Brailler, presents several prototype methods of text entry on a modern touch-screen mobile phone that are based on the Braille alphabet and thus are convenient for visually impaired users. Our third system enables visually impaired users to leverage the camera of a mobile device to accurately recognize currency bills, even if the images are partially or highly distorted. The final system enables visually impaired users to determine whether a pair of clothes, in this case a tie and a shirt, can be worn together, based on current social norms of color matching.
We believe that these applications together provide a suite of important mobile accessibility tools that enhance four critical aspects of the day-to-day routine of a visually impaired user: navigating easily, typing easily, recognizing currency bills (for payments), and identifying matching clothes.
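The clothes-matching idea admits a small illustration. The following is a hedged toy version only: the hue-based rule of thumb (similar or complementary hues "match") and the 30-degree tolerance are invented here, not the system's actual color-matching rules.

```python
import colorsys

# Toy color-matching check for a shirt/tie pair. The rule and the
# threshold are illustrative assumptions, not the thesis system.

def hue(rgb):
    """Hue in degrees of an (R, G, B) color with 0-255 channels."""
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0] * 360.0

def colors_match(shirt_rgb, tie_rgb, tol=30.0):
    d = abs(hue(shirt_rgb) - hue(tie_rgb))
    d = min(d, 360.0 - d)                     # hues wrap around
    return d <= tol or abs(d - 180.0) <= tol  # similar or complementary

print(colors_match((200, 30, 30), (220, 60, 40)))  # two reds: True
print(colors_match((200, 30, 30), (30, 200, 60)))  # red vs. green: False
```

In the deployed setting, the dominant colors would first have to be extracted from camera images under uncontrolled lighting, which is where most of the engineering effort lies.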
-
Ph.D. Thesis
2012
Reusable Software Infrastructure for Stream Processing
Soule, Robert
Abstract
|
PDF
Title: Reusable Software Infrastructure for Stream Processing
Candidate: Soule, Robert
Advisor(s): Grimm, Robert
Abstract:
Developers increasingly use streaming languages to write their data processing applications. While a variety of streaming languages exist, each targeting a particular application domain, they are all similar in that they represent a program as a graph of streams (i.e. sequences of data items) and operators (i.e. data transformers). They are also similar in that they must process large volumes of data with high throughput. To meet this requirement, compilers of streaming languages must provide a variety of streaming-specific optimizations, including automatic parallelization. Traditionally, when many languages share a set of optimizations, language implementors translate the source languages into a common representation called an intermediate language (IL). Because optimizations can modify the IL directly, they can be re-used by all of the source languages, reducing the overall engineering effort. However, traditional ILs and their associated optimizations target single-machine, single-process programs. In contrast, the kinds of optimizations that compilers must perform in the streaming domain are quite different, and often involve reasoning across multiple machines. Consequently, existing ILs are not suited to streaming languages.
This thesis addresses the problem of how to provide a reusable infrastructure for stream processing languages. Central to the approach is the design of an intermediate language specifically for streaming languages and optimizations. The hypothesis is that an intermediate language designed to meet the requirements of stream processing can assure implementation correctness; reduce overall implementation effort; and serve as a common substrate for critical optimizations. As evidence, this thesis provides the following contributions: (1) a catalog of common streaming optimizations that helps define the requirements of a streaming IL; (2) a calculus that enables reasoning about the correctness of source language translation and streaming optimizations; and (3) an intermediate language that preserves the semantics of the calculus, while addressing the implementation issues omitted from the calculus. This work significantly reduces the effort it takes to develop stream processing languages, and jump-starts innovation in language and optimization design.
-
TR2012-948
2012
Hitting the Sweet Spot for Streaming Languages: Dynamic Expressivity with Static Optimization
Soulé, Robert; Gordon, Michael I.; Amarasinghe, Saman; Grimm, Robert; Hirzel, Martin
Abstract
|
PDF
Title: Hitting the Sweet Spot for Streaming Languages: Dynamic Expressivity with Static Optimization
Author(s): Soulé, Robert; Gordon, Michael I.; Amarasinghe, Saman; Grimm, Robert; Hirzel, Martin
Abstract:
Developers increasingly use stream processing languages to write applications that process large volumes of data with high throughput. Unfortunately, when choosing which stream processing language to use, they face a difficult choice. On the one hand, dynamically scheduled languages allow developers to write a wider range of applications, but cannot take advantage of many crucial optimizations. On the other hand, statically scheduled languages are extremely performant, but cannot express many important streaming applications.
This paper presents the design of a hybrid scheduler for stream processing languages. The compiler partitions the streaming application into coarse-grained subgraphs separated by dynamic rate boundaries. It then applies static optimizations to those subgraphs. We have implemented this scheduler as an extension to the StreamIt compiler, and evaluated its performance against three scheduling techniques used by dynamic systems: OS thread, demand, and no-op. Our scheduler not only allows the previously static version of StreamIt to run dynamic rate applications, but it outperforms the three dynamic alternatives. This demonstrates that our scheduler strikes the right balance between expressivity and performance for stream processing languages.
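The partitioning step described above can be sketched as a graph computation: remove the dynamic-rate edges and take the connected components of what remains as the statically schedulable subgraphs. The operator graph and rate labels below are invented for illustration; this is not the StreamIt implementation.

```python
from collections import defaultdict

# Toy partitioning of a stream graph at dynamic rate boundaries.
# edges: (src, dst, 'static' | 'dynamic'); names are illustrative.

def static_subgraphs(operators, edges):
    """Return connected components of the static-rate subgraph."""
    adj = defaultdict(set)
    for src, dst, rate in edges:
        if rate == "static":
            adj[src].add(dst)
            adj[dst].add(src)
    seen, parts = set(), []
    for op in operators:
        if op in seen:
            continue
        stack, comp = [op], set()
        while stack:                 # depth-first flood fill
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        parts.append(frozenset(comp))
    return parts

ops = ["src", "parse", "filter", "agg", "sink"]
edges = [("src", "parse", "static"),
         ("parse", "filter", "dynamic"),   # dynamic rate boundary
         ("filter", "agg", "static"),
         ("agg", "sink", "static")]
parts = static_subgraphs(ops, edges)
print(sorted(sorted(p) for p in parts))  # [['agg', 'filter', 'sink'], ['parse', 'src']]
```

Each resulting component can then be compiled with static optimizations, while the dynamic boundary between them is handled by the runtime scheduler.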
-
Ph.D. Thesis
2012
Building scalable geo-replicated storage backends for web applications
Sovran, Yair
Abstract
|
PDF
Title: Building scalable geo-replicated storage backends for web applications
Candidate: Sovran, Yair
Advisor(s): Li, Jinyang
Abstract:
Web applications increasingly require a storage system that is both scalable and can replicate data across many distant data centers or sites. Most existing storage solutions fall into one of two categories: traditional databases offer strict consistency guarantees and programming ease, but are difficult to scale in a geo-replicated setting; NoSQL stores are scalable and efficient, but have weak consistency guarantees, placing the burden of ensuring consistency on programmers. In this dissertation, we describe two systems that help bridge the two extremes, providing scalable, geo-replicated storage for web applications while remaining easy to program for. Walter is a key-value store that supports transactions and replicates data across distant sites. A key feature underlying Walter is a new isolation property: Parallel Snapshot Isolation (PSI). PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI does not allow write-write conflicts, alleviating the burden of writing conflict resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. Lynx is a distributed database backend for scaling latency-sensitive web applications. Lynx supports optimizing queries via data denormalization, distributed secondary indexes, and materialized join views. To preserve data constraints across denormalized tables and secondary indexes, Lynx relies on a novel primitive: the Distributed Transaction Chain (DTC). A DTC groups a sequence of transactions to be executed on different nodes while providing two guarantees. First, all transactions in a DTC execute exactly once despite failures. Second, transactions from concurrent DTCs are interleaved consistently on common nodes. We built several web applications on top of Walter and Lynx: an auction service, a microblogging service, and a social networking website.
We have found that building web applications using Walter and Lynx is quick and easy. Our experiments show that the resulting applications are capable of providing scalable, low latency operation across multiple geo-replicated sites.
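The counting-set technique mentioned above can be illustrated with a small sketch. This is a hedged simplification, not Walter's data type: each element carries a count, adds and removes increment and decrement it, an element is a member while its count is positive, and because updates commute, replicas can merge their updates in any order.

```python
from collections import Counter

# Toy counting set. Note: Counter's `+` drops non-positive entries,
# which is fine here because membership only needs positive counts.

class CountingSet:
    def __init__(self):
        self.counts = Counter()

    def add(self, x):
        self.counts[x] += 1

    def remove(self, x):
        self.counts[x] -= 1

    def members(self):
        return {x for x, n in self.counts.items() if n > 0}

    def merge(self, other):
        """Combine updates from another replica (order-independent)."""
        merged = CountingSet()
        merged.counts = self.counts + other.counts
        return merged

a, b = CountingSet(), CountingSet()
a.add("photo1"); a.add("photo2")
b.remove("photo2")                   # concurrent remove at another site
print(sorted(a.merge(b).members()))  # ['photo1']
```

The concurrent add and remove of "photo2" cancel out regardless of merge order, which is how a structure like this avoids write-write conflicts under asynchronous replication.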
-
Ph.D. Thesis
2012
Rapid Training of Information Extraction with Local and Global Data Views
Sun, Ang
Abstract
|
PDF
Title: Rapid Training of Information Extraction with Local and Global Data Views
Candidate: Sun, Ang
Advisor(s): Grishman, Ralph
Abstract:
This dissertation focuses on fast system development for Information Extraction (IE). State-of-the-art systems heavily rely on extensively annotated corpora, which are slow to build for a new domain or task. Moreover, previous systems are mostly built with local evidence such as words in a short context window or features that are extracted at the sentence level. They usually generalize poorly on new domains.
This dissertation presents novel approaches for rapidly training an IE system for a new domain or task based on both local and global evidence. Specifically, we present three systems: a relation type extension system based on active learning, a relation type extension system based on semi-supervised learning, and a cross-domain bootstrapping system for domain adaptive named entity extraction.
The active learning procedure adopts features extracted at the sentence level as the local view and distributional similarities between relational phrases as the global view. It builds two classifiers based on these two views to find the most informative contention data points to request human labels so as to reduce annotation cost.
The semi-supervised system aims to learn a large set of accurate patterns for extracting relations between names from only a few seed patterns. It estimates the confidence of a name pair both locally and globally: locally by looking at the patterns that connect the pair in isolation; globally by incorporating the evidence from the clusters of patterns that connect the pair. The use of pattern clusters can prevent semantic drift and contribute to a natural stopping criterion for semi-supervised relation pattern discovery.
For adapting a named entity recognition system to a new domain, we propose a cross-domain bootstrapping algorithm, which iteratively learns a model for the new domain with labeled data from the original domain and unlabeled data from the new domain. We first use word clusters as global evidence to generalize features that are extracted from a local context window. We then select self-learned instances as additional training examples using multiple criteria, including some based on global evidence.
-
Ph.D. Thesis
2012
Combating Sybil attacks in cooperative systems
Tran, Nguyen
Abstract
|
PDF
Title: Combating Sybil attacks in cooperative systems
Candidate: Tran, Nguyen
Advisor(s): Li, Jinyang
Abstract:
Cooperative systems are ubiquitous nowadays. In a cooperative system, end users contribute resources to run the service instead of only receiving the service passively from the system. For example, users upload and comment on pictures and videos on Flickr and YouTube, and submit and vote on news articles on Digg. As another example, users in BitTorrent contribute bandwidth and storage to help each other download content. As long as users behave as expected, these systems benefit immensely from user contribution. In fact, five of the ten most popular websites operate in this cooperative fashion (Facebook, YouTube, Blogger, Twitter, Wikipedia), and BitTorrent dominates global Internet traffic.
A robust cooperative system cannot blindly trust that its users will participate truthfully. Malicious users seek to exploit the system for profit, while selfish users consume resources but avoid contributing. For example, adversaries have manipulated the voting system of Digg to promote articles of dubious quality, and selfish users in public BitTorrent communities leave the system to avoid uploading files to others, resulting in drastic performance degradation for these content distribution systems. The ultimate way to disrupt the security and incentive mechanisms of cooperative systems is the Sybil attack, in which the adversary creates many Sybil (fake) identities and uses them to disrupt the systems' normal operation. No security or incentive mechanism works correctly if the system does not have robust identity management that can defend against Sybil attacks.
This thesis provides robust identity management schemes that are resilient to the Sybil attack, and uses them to secure and incentivize user contribution in several example cooperative systems. The main theme of this work is to leverage the social network among users in designing secure and incentive-compatible cooperative systems. First, we develop a distributed admission control protocol, called Gatekeeper, that leverages the social network to admit most honest user identities and only a few Sybil identities into the system. Gatekeeper can serve as robust identity management for both centralized and decentralized cooperative systems. Second, we provide a vote aggregation system for content voting systems, called SumUp, that can prevent an adversary from casting many bogus votes for a piece of content using the Sybil attack. SumUp leverages unique properties of content voting systems to provide significantly better Sybil defense than applying a general admission control protocol such as Gatekeeper. Finally, we provide a robust reputation system, called Credo, that can be used to incentivize bandwidth contribution in peer-to-peer content distribution networks. Credo's reputation captures user contribution and is resilient to both Sybil and collusion attacks.
-
Ph.D. Thesis
2012
Multi-species biclustering: An integrative method to identify functional gene conservation between multiple species
Waltman, Peter
Abstract
|
PDF
Title: Multi-species biclustering: An integrative method to identify functional gene conservation between multiple species
Candidate: Waltman, Peter
Advisor(s): Bonneau, Richard
Abstract:
Background : Several recent comparative functional genomics projects have indicated that the co-regulation of many genes is conserved across species, at least in part. This suggests that comparative analysis of functional genomics data-sets could prove powerful in identifying co-regulated groups that are conserved across multiple species.
Results : We present recent work extending our cMonkey algorithm to simultaneously bicluster heterogeneous data from multiple species in order to identify conserved modules of orthologous genes, which can yield evolutionary insights into the formation of regulatory modules. We also present results from applying the multi-species analysis to two triplets of bacteria. The first is a triplet of Gram-positive bacteria consisting of Bacillus subtilis, Bacillus anthracis, and Listeria monocytogenes; the second is a triplet of Gram-negative bacteria comprising Escherichia coli, Salmonella typhimurium, and Vibrio cholerae. Finally, we present initial results from the multi-species biclustering analysis of human and mouse hematopoietic differentiation data.
Conclusion : Analysis of the biclusters obtained revealed a surprising number of gene groups with conserved modularity and high biological significance, as judged by several measures of cluster quality. We also highlight cases of interest from the Gram-positive triplet, including one that suggests a temporal difference in the expression of genes governing sporulation in the two Bacillus species. While our analysis of mouse and human hematopoietic differentiation is preliminary, it indicates the applicability of this approach to eukaryotic systems, including the comparison of cancer model systems. Finally, we suggest ways in which this analysis could be extended to identify divergent modules that may exist between normal and diseased tissue.
-
Ph.D. Thesis
2011
Collusion Preserving Computation
Alwen, Joel
Abstract
|
PDF
Title: Collusion Preserving Computation
Candidate: Alwen, Joel
Advisor(s): Dodis, Yevgeniy
Abstract:
In collusion-free protocols, subliminal communication is impossible and parties are thus unable to communicate any information beyond what the protocol allows. Collusion-free protocols are interesting for several reasons, but have specifically attracted attention because they can be used to reduce trust in game-theoretic mechanisms. Collusion-free protocols are impossible to achieve (in general) when all parties are connected by point-to-point channels, but exist under certain physical assumptions (Lepinski et al., STOC 2005) or in specific network topologies (Alwen et al., Crypto 2008).
In addition to proposing the definition of collusion-preserving computation, we explore necessary properties of the underlying communication resource. Next, we provide a general feasibility result for collusion-preserving computation of arbitrary functionalities. We show that the resulting protocols enjoy an elegant (and surprisingly strong) fallback security even when the underlying communication resource acts in a Byzantine manner. Finally, we investigate the implications of these results in the context of mechanism design.
-
Ph.D. Thesis
2011
Re-architecting Web and Mobile Information Access for Emerging Regions
Chen, Jay
Abstract
|
PDF
Title: Re-architecting Web and Mobile Information Access for Emerging Regions
Candidate: Chen, Jay
Advisor(s): Subramanian, Lakshminarayanan
Abstract:
Providing access to information for people in emerging regions is an important problem. Over the past decade, many systems have been proposed, and increasingly many deployed, to enable information access, but successes are few and modest at best. Internet access in emerging regions is still generally unusable or intolerably slow. Mobile phone applications are either not designed for the phones that poor people own, or else they lack functionality, are difficult to use, or are expensive to operate. In this work we focus on enabling digital information access for people in emerging regions.
To advance the state of the art, we contribute numerous observations about how people access information in emerging regions, why the current models for web access and SMS platforms are broken, and techniques to enable applications over constrained Internet or SMS. The mechanisms presented here were designed after extensive field work in several different regions, including rural, peri-urban, and urban areas in India, Kenya, Ghana, and Mexico. Multiple user studies were conducted throughout the course of system design and prototyping. We present a novel set of context-appropriate platforms and tools, some spanning several layers of the networking stack. Five complete systems were implemented and deployed in the field. First, Event Logger for Firefox (ELF) is an easily deployable Firefox extension that functions as both a web browsing analysis tool and an in-browser web optimization platform. Second, RuralCafe provides a platform for web search and browsing over extremely slow or intermittent networks. Third, Contextual Information Portals (CIP) provide cached repositories of web pages tailored to the particular context in which they are to be used. Fourth, UjU is a mobile application platform that simplifies the design of new SMS-based mobile applications. Finally, SMSFind is an SMS-based search service that runs on mobile phones without setup or subscription to a data plan.
Taken as a whole, the systems here are a comprehensive solution for addressing the problem of enabling digital information access in emerging regions. -
Ph.D. Thesis
2011
Automatic Deduction for Theories of Algebraic Data Types
Chikanian, Igor
Abstract
|
PDF
Title: Automatic Deduction for Theories of Algebraic Data Types
Candidate: Chikanian, Igor
Advisor(s): Barrett, Clark
Abstract:
In this thesis we present formal logical systems, concerned with reasoning about algebraic data types.
The first formal system is a quantifier-free calculus (with outermost universal quantification). The calculus comprises state-change rules, and computations are performed by successive applications of these rules; it thereby gives rise to an abstract decision procedure that determines whether a given formula involving algebraic type members is valid. It is shown that this calculus is sound and complete. We also examine how this system performs in practice and give experimental results. Our main contribution, as compared to previous work on this subject, is a new and more efficient decision procedure for checking satisfiability of the universal fragment of the theory of algebraic data types.
The second formal system, called Term Builder, is a deductive system based on higher-order type theory, which subsumes second-order and higher-order logics. The main purpose of this calculus is to formulate and prove theorems about algebraic or other arbitrary user-defined types. Term Builder supports proof objects and is both an interactive theorem prover and a verifier. We describe the built-in deductive capabilities of Term Builder and show its consistency. The logic represented by our prover is intuitionistic. Naturally, it is also incomplete and undecidable, but its expressive power is much higher than that of the first formal system.
Among our achievements in building this theorem prover is an elegant and intuitive GUI for building proofs. A novel feature from the foundational viewpoint is that, in contrast with other approaches, we obtain the uniqueness-of-types property outright, not merely modulo beta-conversion.
-
TR2011-943
2011
Two-Level Overlapping Schwarz Algorithms for a Staggered Discontinuous Galerkin Method
Chung, Eric T.;
Kim, Hyea Hyun; Widlund, Olof B.
Abstract
|
PDF
Title: Two-Level Overlapping Schwarz Algorithms for a Staggered Discontinuous Galerkin Method
Author(s): Chung, Eric T.; Kim, Hyea Hyun; Widlund, Olof B.
Abstract:
Two overlapping Schwarz algorithms are developed for a discontinuous Galerkin (DG) finite element approximation of second order scalar elliptic problems in both two and three dimensions. The discontinuous Galerkin formulation is based on a staggered discretization introduced by Chung and Engquist for the acoustic wave equation. Two types of coarse problems are introduced for the two-level Schwarz algorithms. The first is built on a non-overlapping subdomain partition, which allows quite general subdomain partitions, and the second on introducing an additional coarse triangulation that can also be quite independent of the fine triangulation. Condition number bounds are established and numerical results are presented.
-
TR2011-946
2011
An Alternative Coarse Space for Irregular Subdomains and an Overlapping Schwarz Algorithm
Dohrmann, Clark R.;
Widlund, Olof B.
Abstract
|
PDF
Title: An Alternative Coarse Space for Irregular Subdomains and an Overlapping Schwarz Algorithm
Author(s): Dohrmann, Clark R.; Widlund, Olof B.
Abstract:
In earlier work on domain decomposition methods for elliptic problems in the plane, an assumption that each subdomain is triangular, or a union of a few coarse triangles, has often been made. This is similar to what is required in geometric multigrid theory and is unrealistic if the subdomains are produced by a mesh partitioner. In an earlier paper, coauthored with Axel Klawonn, the authors introduced a coarse subspace for an overlapping Schwarz method with one degree of freedom for each subdomain vertex and one for each subdomain edge. A condition number bound proportional to \((1+\log(H/h))^2(1+H/\delta)\) was established assuming only that the subdomains are John domains; here \(H/\delta\) measures the relative overlap between neighboring subdomains and \(H/h\) the maximum number of elements across individual subdomains. We were also able to relate the rate of convergence to a parameter in an isoperimetric inequality for the subdomains into which the domain of the problem has been partitioned.
In this paper, the dimension of the coarse subspace is decreased by using only one degree of freedom for each subdomain vertex; if all subdomains have three edges, this leads to a reduction of the dimension of the coarse subspace by approximately a factor four. In addition, the condition number bound is shown to be proportional to \((1+\log(H/h))(1+H/\delta)\) under a quite mild assumption on the relative length of adjacent subdomain edges.
In this study, the subdomains are assumed to be uniform in the sense of Peter Jones. As in our earlier work, the results are insensitive to arbitrary large jumps in the coefficients of the elliptic problem across the interface between the subdomains.
Numerical results are presented which confirm the theory and demonstrate the usefulness of the algorithm for a variety of mesh decompositions and distributions of material properties. It is also shown that the new algorithm often converges faster than the older one in spite of the fact that the dimension of the coarse space has been decreased considerably.
-
TR2011-939
2011
Parsing All of C by Taming the Preprocessor
Gazzillo, Paul;
Grimm, Robert
Abstract
|
PDF
Title: Parsing All of C by Taming the Preprocessor
Author(s): Gazzillo, Paul; Grimm, Robert
Abstract:
Given the continuing popularity of C for building large-scale programs, such as Linux, Apache, and Bind, it is critical to provide effective tool support, including, for example, code browsing, bug finding, and automated refactoring. Common to all such tools is a need to parse C. But C programs contain not only the C language proper but also preprocessor invocations for file inclusion (#include), conditional compilation (#if, #ifdef, and so on), and macro definition/expansion (#define). Worse, the preprocessor is a textual substitution system, which is oblivious to C constructs and operates on individual tokens. At the same time, the preprocessor is indispensable for improving C's expressivity, abstracting over software/hardware dependencies, and deriving variations from the same code base. The x86 version of the Linux kernel, for example, depends on about 7,600 header files for file inclusion, 7,000 configuration variables for conditional compilation, and 520,000 macros for code expansion.
In this paper, we present a new tool for parsing all of C, including arbitrary preprocessor use. Our tool, which is called SuperC, is based on a systematic analysis of all interactions between lexing, preprocessing, and parsing to ensure completeness. It first lexes and preprocesses source code while preserving conditionals. It then parses the result using a novel variant of LR parsing, which automatically forks parsers when encountering a conditional and merges them again when reaching the same input in the same state. The result is a well-formed AST, containing static choice nodes for conditionals. While the parsing algorithm and engine are new, neither grammar nor LR parser table generator need to change. We discuss the results of our problem analysis, the parsing algorithm itself, the pragmatics of building a real-world tool, and a demonstration on the x86 version of the Linux kernel.
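The static choice nodes that the abstract describes can be illustrated with a toy sketch (this is not SuperC's actual fork/merge LR machinery, and the token-stream encoding is a hypothetical simplification): each preprocessor conditional carries its branches, and the parser emits a choice node holding the sub-ASTs of both branches.

```python
# Toy sketch only: a conditional in the token stream becomes a
# ('choice', guard, then_ast, else_ast) node, mirroring the static
# choice nodes SuperC places in its well-formed AST. Real SuperC forks
# LR parser states at a conditional and merges parsers that reach the
# same input in the same state, rather than recursing over branches.
def parse(stream):
    ast = []
    for item in stream:
        if isinstance(item, tuple) and item[0] == '#if':
            _, guard, then_toks, else_toks = item
            ast.append(('choice', guard, parse(then_toks), parse(else_toks)))
        else:
            ast.append(('token', item))
    return ast
```

For example, `parse(['int', 'x', ('#if', 'CONFIG_SMP', [';'], ['=', '0', ';'])])` yields an AST whose last node is a choice guarded by `CONFIG_SMP`, so a single parse covers both configurations.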
-
Ph.D. Thesis
2011
Efficient Cryptographic Primitives for Non-Interactive Zero-Knowledge Proofs and Applications
Haralambiev, Kristiyan
Abstract
|
PDF
Title: Efficient Cryptographic Primitives for Non-Interactive Zero-Knowledge Proofs and Applications
Candidate: Haralambiev, Kristiyan
Advisor(s): Shoup, Victor
Abstract:
Non-interactive zero-knowledge (NIZK) proofs have enjoyed much interest in cryptography since they were introduced more than twenty years ago by Blum et al. [BFM88]. While quite useful when designing modular cryptographic schemes, until recently NIZK proofs could be realized efficiently only using certain heuristics. However, such heuristic schemes have been widely criticized. In this work we focus on designing schemes which avoid them. In [GS08], Groth and Sahai presented the first efficient (and currently the only) NIZK proof system in the standard model. The construction is based on bilinear maps and is limited to languages defined by certain satisfiable systems of equations. Given this expressivity limitation, we are interested in cryptographic primitives that are "compatible" with such systems of equations. Equipped with such primitives and the Groth-Sahai proof system, we show how to construct cryptographic schemes efficiently in a modular fashion.
In this work, we describe properties required by any cryptographic scheme to mesh well with Groth-Sahai proofs. Towards this, we introduce the notion of "structure-preserving" cryptographic scheme. We present the first constant-size structure-preserving signature scheme for messages consisting of general bilinear group elements. This allows us (for the first time) to instantiate efficiently a modular construction of round-optimal blind signature based on the framework of Fischlin [Fis06].
Our structure-preserving homomorphic trapdoor commitment schemes yield efficient leakage-resilient signatures (in the bounded-leakage model) which satisfy the standard security requirements and additionally tolerate any amount of leakage; all previous works satisfied at most two of those three properties.
Next, we build a structure-preserving encryption scheme which satisfies the standard CCA security requirements. While somewhat similar to the notion of verifiable encryption, it provides better properties and yields the first efficient two-party protocol for joint ciphertext computation. Note that the efficient realization of such a protocol was not previously possible even using the heuristics mentioned above.
Lastly, in this line of work, we revisit the notion of simulation extractability and define "true-simulation extractable" NIZK proofs. Although quite similar to the notion of simulation-sound extractable NIZK proofs, there is a subtle but rather important difference which makes it weaker and easier to instantiate efficiently. As it turns out, in many scenarios this new notion is sufficient, and using it we can construct efficient leakage-resilient signatures and a CCA-secure encryption scheme. -
TR2011-940
2011
Sharing is Caring: Combination of Theories
Jovanovic, Dejan;
Barrett, Clark
Abstract
|
PDF
Title: Sharing is Caring: Combination of Theories
Author(s): Jovanovic, Dejan; Barrett, Clark
Abstract:
One of the main shortcomings of the traditional methods for combining theories is the complexity of guessing the arrangement of the variables shared by the individual theories. This paper presents a reformulation of the Nelson-Oppen method that takes into account explicit equality propagation and can ignore pairs of shared variables that the theories do not care about. We show the correctness of the new approach and present care functions for the theory of uninterpreted functions and the theory of arrays. The effectiveness of the new method is illustrated by experimental results demonstrating a dramatic performance improvement on benchmarks combining arrays and bit-vectors.
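To see why guessing arrangements is costly, a minimal sketch (all names hypothetical) can enumerate every partition of the shared variables into equality classes; one such partition is exactly an "arrangement," and their number grows as the Bell numbers, which is the blow-up that care functions let the combination method avoid.

```python
def arrangements(shared):
    # Yield all partitions of the shared variables into equality classes.
    # Each partition is one "arrangement" a naive Nelson-Oppen combination
    # might have to guess; there are Bell(n) of them for n variables.
    if not shared:
        yield []
        return
    first, rest = shared[0], shared[1:]
    for part in arrangements(rest):
        for i in range(len(part)):
            # Put `first` into an existing equality class...
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        # ...or give `first` its own singleton class.
        yield part + [[first]]
```

Already for three shared variables there are 5 arrangements, and the count grows super-exponentially; a care function prunes the pairs of shared variables whose (dis)equality a theory is indifferent to.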
-
Ph.D. Thesis
2011
Learning Feature Hierarchies for Object Recognition
Kavukcuoglu, Koray
Abstract
|
PDF
Title: Learning Feature Hierarchies for Object Recognition
Candidate: Kavukcuoglu, Koray
Advisor(s): LeCun, Yann
Abstract:
In this thesis we study unsupervised learning algorithms for training feature extractors and building deep learning models. We propose sparse-modeling algorithms as the foundation for unsupervised feature extraction systems. To reduce the cost of the inference process required to obtain the optimal sparse code, we model a feed-forward function that is trained to predict this optimal sparse code. Using an efficient predictor function enables the use of sparse coding in hierarchical models for object recognition. We demonstrate the performance of the developed system on several recognition tasks, including object recognition, handwritten digit classification and pedestrian detection. Robustness to noise or small variations in the input is a very desirable property for a feature extraction algorithm. In order to train locally-invariant feature extractors in an unsupervised manner, we use group sparsity criteria that promote similarity between the dictionary elements within a group. This model produces locally-invariant representations under small perturbations of the input, thus improving the robustness of the features. Many sparse modeling algorithms are trained on small image patches that are the same size as the dictionary elements. This forces the system to learn multiple shifted versions of each dictionary element. However, when used convolutionally over large images to extract features, these models produce very redundant representations. To avoid this problem, we propose convolutional sparse coding algorithms that yield a richer set of dictionary elements, reduce the redundancy of the representation and improve recognition performance.
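The feed-forward predictor idea can be sketched minimally as follows (the linear map plus soft-threshold form is an illustrative assumption, not the thesis's exact architecture; in the thesis the predictor's parameters are trained so that its output matches the code found by full sparse inference):

```python
import math

def predict_code(x, W, b, theta):
    # One feed-forward pass: code[k] = shrink(sum_j W[k][j]*x[j] + b[k], theta).
    # The soft-threshold "shrink" zeroes small activations, so the predicted
    # code is sparse without running an iterative inference loop per input.
    code = []
    for k in range(len(W)):
        a = sum(W[k][j] * x[j] for j in range(len(x))) + b[k]
        code.append(math.copysign(max(abs(a) - theta, 0.0), a))
    return code
```

The design point is speed: inference by iterative optimization must be run per input, whereas a trained predictor produces an approximate sparse code in a single cheap pass, which is what makes sparse coding usable inside deep hierarchies.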
-
TR2011-944
2011
Effective Synthesis of Asynchronous Systems from GR(1) Specifications
Klein, Uri;
Piterman, Nir; Pnueli, Amir
Abstract
|
PDF
Title: Effective Synthesis of Asynchronous Systems from GR(1) Specifications
Author(s): Klein, Uri; Piterman, Nir; Pnueli, Amir
Abstract:
We consider automatic synthesis from linear temporal logic specifications for asynchronous systems. We aim for the produced reactive systems to be usable as software in a multi-threaded environment. We extend a previous reduction of asynchronous synthesis to synchronous synthesis to the setting of multiple input and multiple output variables. Much like synthesis for synchronous designs, this solution is not practical as it requires determinization of automata on infinite words and solution of complicated games. We follow advances in synthesis of synchronous designs, which restrict the handled specifications but achieve scalability and efficiency. We propose a heuristic that, in some cases, maintains scalability for asynchronous synthesis. Our heuristic can prove that specifications are realizable and extract designs. This is done by a reduction to synchronous synthesis that is inspired by the theoretical reduction.
-
TR2011-938
2011
Formalization and Automated Verification of RESTful Behavior
Klein, Uri;
Namjoshi, Kedar S.
Abstract
|
PDF
Title: Formalization and Automated Verification of RESTful Behavior
Author(s): Klein, Uri; Namjoshi, Kedar S.
Abstract:
REST is a software architectural style used for the design of highly scalable web applications. Interest in REST has grown rapidly over the past decade, spurred by the growth of open web APIs. On the other hand, there is also considerable confusion surrounding REST: many examples of supposedly RESTful APIs violate key REST constraints. We show that the constraints of REST and of RESTful HTTP can be precisely formulated within temporal logic. This leads to methods for model checking and run-time verification of RESTful behavior. We formulate several relevant verification questions and analyze their complexity.
-
Ph.D. Thesis
2011
Topics in Formal Synthesis and Modeling
Klein, Uri
Abstract
|
PDF
Title: Topics in Formal Synthesis and Modeling
Candidate: Klein, Uri
Advisor(s): Pnueli, Amir; Zuck, Lenore
Abstract:
The work presented here focuses on two problems: synthesizing systems from formal specifications, and formalizing REST, a popular development pattern for web applications.
For the synthesis problem, we distinguish between the synchronous and the asynchronous case. For the former, we solve a problem concerning a fundamental flaw in specification construction in previous work. We continue with exploring effective synthesis of asynchronous systems (programs on multi-threaded systems). Two alternative models of asynchrony are presented, and shown to be equally expressive for the purpose of synthesis.
REST is a software architectural style used for the design of highly scalable web applications. Interest in REST has grown rapidly over the past decade. However, there is also considerable confusion surrounding REST: many examples of supposedly RESTful APIs violate key REST constraints. We show that the constraints of REST and of RESTful HTTP can be precisely formulated within temporal logic. This leads to methods for model checking and run-time verification of RESTful behavior. We formulate several relevant verification questions and analyze their complexity. -
TR2011-937
2011
Domain Decomposition Methods for Reissner-Mindlin Plates Discretized with the Falk-Tu Elements
Lee, Jong Ho
Abstract
|
PDF
Title: Domain Decomposition Methods for Reissner-Mindlin Plates Discretized with the Falk-Tu Elements
Author(s): Lee, Jong Ho
Abstract:
The Reissner-Mindlin plate theory models a thin plate with thickness t. The condition number of finite element approximations of this model deteriorates badly as the thickness t of the plate converges to 0. In this thesis, we develop an overlapping domain decomposition method for the Reissner-Mindlin plate model discretized by Falk-Tu elements with a convergence rate which does not deteriorate when t converges to 0. We use modern overlapping methods which use the Schur complements to define coarse basis functions and show that the condition number of this overlapping method is bounded by \(C(1 + H/\delta)^3(1 + \log(H/h))^2\). Here \(H\) is the maximum diameter of the subdomains, \(\delta\) the size of the overlap between subdomains, and \(h\) the element size. Numerical examples are provided to confirm the theory. We also modify the overlapping method to develop a BDDC method for the Reissner-Mindlin model. We establish numerically an extension lemma to obtain a constant bound and an edge lemma to obtain a \(C(1 + \log(H/h))^2\) bound. Given such bounds, the condition number of this BDDC method is shown to be bounded by \(C(1 + \log(H/h))^2\).
-
Ph.D. Thesis
2011
Adaptive Isotopic Approximation of Nonsingular Curves and Surfaces
Lin, Long
Abstract
|
PDF
Title: Adaptive Isotopic Approximation of Nonsingular Curves and Surfaces
Candidate: Lin, Long
Advisor(s): Yap, Chee
Abstract:
Consider the problem of computing isotopic approximations of nonsingular curves and surfaces that are implicitly represented by equations of the form \(f(X,Y)=0\) and \(f(X,Y,Z)=0\). This fundamental problem has seen much progress along several fronts, but we will focus on domain subdivision algorithms. Two algorithms in this area are from Snyder (1992) and Plantinga and Vegter (2004). We introduce a family of new algorithms that combines the advantages of these two algorithms: like Snyder, we use the parameterizability criterion for subdivision, and like Plantinga and Vegter, we exploit nonlocal isotopy.
We first apply our approach to curves, resulting in a more efficient algorithm. We then extend our approach to surfaces. The extension is by no means routine, as the correctness arguments and case analysis are more subtle. Also, a new phenomenon arises in which local rules for constructing surfaces are no longer sufficient.
We further extend our algorithms in two important and practical directions. First, we allow subdivision cells to be non-squares or non-cubes, with arbitrary but bounded aspect ratios: in 2D, boxes may be split into 2 or 4 children; in 3D, into 2, 4, or 8 children. Second, we allow the input region of interest (ROI) to have arbitrary geometry represented by a quadtree or octree, as long as the curves or surfaces have no singularities in the ROI and intersect the boundary of the ROI transversally.
Our algorithm is numerical because our primitives are based on interval arithmetic and exact BigFloat numbers. It is practical, easy to implement exactly (compared to algebraic approaches), and does not suffer from implementation gaps (compared to geometric approaches). We report some very encouraging experimental results, showing that our algorithms can be much more efficient than the algorithms of Plantinga and Vegter (2D and 3D) and Snyder (2D only).
-
Ph.D. Thesis
2011
Real-Space Localization Methods for Minimizing the Kohn-Sham Energy
Millstone, Marc
Abstract
|
PDF
Title: Real-Space Localization Methods for Minimizing the Kohn-Sham Energy
Candidate: Millstone, Marc
Advisor(s): Overton, Michael
Abstract:
The combination of ever increasing computational power and new mathematical models has fundamentally changed the field of computational chemistry. One example of this is the use of new algorithms for computing the charge density of a molecular system from which one can predict many physical properties of the system.
This thesis presents two new algorithms for minimizing the Kohn-Sham energy, which is used to describe a system of non-interacting electrons through a set of single-particle wavefunctions. By exploiting a known localization region of the wavefunctions, each algorithm evaluates the Kohn-Sham energy function and gradient at a set of iterates that have a special sparsity structure. We have chosen to represent the problem in real-space using finite-differences, allowing us to efficiently evaluate the energy function and gradient using sparse linear algebra. Detailed numerical experiments are provided on a set of representative molecules demonstrating the performance and robustness of these methods.
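The sparsity that real-space finite differences provide can be sketched in one dimension (illustrative only; the thesis works with the full Kohn-Sham energy in 3-D): each grid point of a second-difference operator touches only its neighbors, so applying it to a wavefunction is linear, not quadratic, in the number of grid points.

```python
def apply_laplacian_1d(psi, h):
    # Second-order central difference (psi[i-1] - 2*psi[i] + psi[i+1]) / h^2,
    # with zero (Dirichlet) values assumed outside the grid. Each output entry
    # depends on at most three inputs -- the sparsity that real-space methods
    # exploit when evaluating the energy and gradient.
    n = len(psi)
    out = []
    for i in range(n):
        left = psi[i - 1] if i > 0 else 0.0
        right = psi[i + 1] if i < n - 1 else 0.0
        out.append((left - 2.0 * psi[i] + right) / (h * h))
    return out
```

The same structure carries over when the iterates themselves are kept sparse: a wavefunction supported only on its localization region contributes work proportional to that region, not to the whole domain.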
-
Ph.D. Thesis
2011
Scoring-and-Unfolding Trimmed Tree Assembler: Algorithms for Assembling Genome Sequences Accurately and Efficiently
Narzisi, Giuseppe
Abstract
|
PDF
Title: Scoring-and-Unfolding Trimmed Tree Assembler: Algorithms for Assembling Genome Sequences Accurately and Efficiently
Candidate: Narzisi, Giuseppe
Advisor(s): Mishra, Bud
Abstract:
The recent advances in DNA sequencing technology and their many potential applications to Biology and Medicine have rekindled enormous interest in several classical algorithmic problems at the core of Genomics and Computational Biology: primarily, the whole-genome sequence assembly problem (WGSA). Two decades back, in the context of the Human Genome Project, the problem had received unprecedented scientific prominence: its computational complexity and intractability were thought to have been well understood; various competitive heuristics thoroughly explored; and the necessary software properly implemented and validated. However, several recent studies, focusing on the experimental validation of de novo assemblies, have highlighted several limitations of current assemblers.
Intrigued by these negative results, this dissertation reinvestigates the algorithmic techniques required to correctly and efficiently assemble genomes. Because of its connection to a well-known NP-complete combinatorial optimization problem, WGSA has historically been assumed to be amenable only to greedy and heuristic methods. By prioritizing efficiency, these methods rely on local searches and are thus inherently approximate, ambiguous, or error-prone. This dissertation presents a novel sequence assembler, SUTTA, that dispenses with the idea of limiting the solutions to just the approximate ones, and instead favors an approach that could potentially lead to an exhaustive (exponential-time) search of all possible layouts but tames the complexity through constrained search (branch-and-bound) and quick identification and pruning of implausible solutions.
Complementary to this problem is the task of validating the generated assemblies. Unfortunately, no commonly accepted method exists yet, and widely used metrics to compare the assembled sequences emphasize only size, poorly capturing quality and accuracy. This dissertation also addresses these concerns by developing a more comprehensive metric, the Feature-Response Curve, that, using ideas from the classical ROC (receiver operating characteristic) curve, more faithfully captures the trade-off between contiguity and quality.
Finally, this dissertation demonstrates the advantages of a complete pipeline integrating base-calling (TotalReCaller) with assembly (SUTTA) in a Bayesian manner.
-
TR2011-942
2011
Domain Decomposition Methods for Raviart-Thomas Vector Fields
Oh, Duk-Soon
Abstract
|
PDF
Title: Domain Decomposition Methods for Raviart-Thomas Vector Fields
Author(s): Oh, Duk-Soon
Abstract:
Raviart-Thomas finite elements are very useful for problems posed in H(div) since they are H(div)-conforming. We introduce two domain decomposition methods for solving vector field problems posed in H(div) discretized by Raviart-Thomas finite elements.
A two-level overlapping Schwarz method is developed. The coarse part of the preconditioner is based on energy-minimizing extensions and the local parts consist of traditional solvers on overlapping subdomains. We prove that our method is scalable and that the condition number grows linearly with the logarithm of the number of degrees of freedom in the individual subdomains and linearly with the relative overlap between the overlapping subdomains. The condition number of the method is also independent of the values and jumps of the coefficients across the interface between subdomains. We provide numerical results to support our theory.
We also consider a balancing domain decomposition by constraints (BDDC) method. The BDDC preconditioner consists of a coarse part involving primal constraints across the interface between subdomains and local parts related to the Schur complements corresponding to the local subdomain problems. We provide bounds of the condition number of the preconditioned linear system and suggest that the condition number has a polylogarithmic bound in terms of the number of degrees of freedom in the individual subdomains from our numerical experiments for arbitrary jumps of the coefficients across the subdomain interfaces.
-
TR2011-945
2011
From a Calculus to an Execution Environment for Stream Processing
Soulé, Robert;
Hirzel, Martin; Gedik, Bugra; Grimm, Robert
Abstract
|
PDF
Title: From a Calculus to an Execution Environment for Stream Processing
Author(s): Soulé, Robert; Hirzel, Martin; Gedik, Bugra; Grimm, Robert
Abstract:
At one level, this paper is about River, a virtual execution environment for stream processing. Stream processing is a paradigm well-suited for many modern data processing systems that ingest high-volume data streams from the real world, such as audio/video streaming, high-frequency trading, and security monitoring. One attractive property of stream processing is that it lends itself to parallelization on multi-cores, and even to distribution on clusters when extreme scale is required. Stream processing has been coevolved by several communities, leading to diverse languages with similar core concepts. Providing a common execution environment reduces language development effort and increases portability. We designed River as a practical realization of Brooklet, a calculus for stream processing. So at another level, this paper is about a journey from theory (the calculus) to practice (the execution environment). The challenge is that, by definition, a calculus abstracts away all but the most central concepts. Hence, there are several research questions in concretizing the missing parts, not to mention a significant engineering effort in implementing them. But the effort is well worth it, because the benefit of using a calculus as a foundation is that it yields clear semantics and proven correctness results.
-
Ph.D. Thesis
2011
Cryptographic Resilience to Continual Information Leakage
Wichs, Daniel
Abstract
|
PDF
Title: Cryptographic Resilience to Continual Information Leakage
Candidate: Wichs, Daniel
Advisor(s): Dodis, Yevgeniy
Abstract:
We study the question of achieving cryptographic security on devices that leak information about their internal secret state to an external attacker. This study is motivated by the prevalence of side-channel attacks, where the physical characteristics of a computation (e.g. timing, power-consumption, temperature, radiation, acoustics, etc.) can be measured, and may reveal useful information about the internal state of a device. Since some such leakage is inevitably present in almost any physical implementation, we believe that this problem cannot just be addressed by physical countermeasures alone. Instead, it should already be taken into account when designing the mathematical specification of cryptographic primitives and included in the formal study of their security.
In this thesis, we propose a new formal framework for modeling the leakage available to an attacker. This framework, called the continual leakage model, assumes that an attacker can continually learn arbitrary information about the internal secret state of a cryptographic scheme at any point in time, subject only to the constraint that the rate of leakage is bounded. More precisely, our model assumes some abstract notion of time periods. In each such period, the attacker can choose to learn arbitrary functions of the current secret state of the scheme, as long as the number of output bits leaked is not too large. In our solutions, cryptographic schemes will continually update their internal secret state at the end of each time period. This will ensure that leakage observed in different time periods cannot be meaningfully combined to break the security of the cryptosystem. Although these updates modify the secret state of the cryptosystem, the desired functionality of the scheme is preserved, and the users can remain oblivious to these updates. We construct signatures, encryption, and secret sharing/storage schemes in this model.
-
Ph.D. Thesis
2011
Surface Representation of Particle Based Fluids
Yu, Jihun
Abstract
|
PDF
Title: Surface Representation of Particle Based Fluids
Candidate: Yu, Jihun
Advisor(s): Yap, Chee
Abstract:
In this thesis, we focus on surface representation for particle-based fluid simulators such as Smoothed Particle Hydrodynamics (SPH). We first present a new surface reconstruction algorithm which formulates the implicit function as a sum of anisotropic smoothing kernels. The direction of anisotropy at a particle is determined by performing Weighted Principal Component Analysis (WPCA) over the neighboring particles. In addition, we perform a smoothing step that re-positions the centers of these smoothing kernels. Since these anisotropic smoothing kernels capture the local particle distributions more accurately, our method has advantages over existing methods in representing smooth surfaces, thin streams and sharp features of fluids. This method is fast, easy to implement, and the results demonstrate a significant improvement in the quality of reconstructed surfaces as compared to existing methods. Next, we introduce the idea of using an explicit triangle mesh to track the air/liquid interface in an SPH simulator.
Once an initial surface mesh is created, this mesh is carried forward in time using nearby particle velocities to advect the mesh vertices. The mesh connectivity remains mostly unchanged across time-steps; it is only modified locally for topology change events or for the improvement of triangle quality. In order to ensure that the surface mesh does not diverge from the underlying particle simulation, we periodically project the mesh surface onto an implicit surface defined by the physics simulation. The mesh surface presents several advantages over previous SPH surface tracking techniques: A new method for surface tension calculations clearly outperforms the state of the art in SPH surface tension for computer graphics. A new method for tracking detailed surface information (like colors) is less susceptible to numerical diffusion than competing techniques. Finally, a temporally-coherent surface mesh allows us to simulate high-resolution surface wave dynamics without being limited by the particle resolution of the SPH simulation.
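The WPCA step used for the anisotropic kernels above can be sketched in 2-D (an illustrative simplification with a hypothetical weight function; the thesis works in 3-D and also rescales the kernels): weight the neighbors of a particle with a smooth falloff, form the weighted covariance, and read the anisotropy off its eigenvalues.

```python
import math

def weighted_anisotropy(points, center, h):
    # 2-D sketch: weighted covariance of the neighbors of `center`
    # within radius h, using a smooth cubic falloff weight (assumed
    # form). Returns (major, minor) variances from the closed-form
    # eigenvalues of the symmetric 2x2 covariance; a large major/minor
    # ratio means the local particle distribution is elongated, so the
    # smoothing kernel should be stretched along the major axis.
    ws, mx, my = [], 0.0, 0.0
    for (x, y) in points:
        r = math.hypot(x - center[0], y - center[1])
        w = 1.0 - (r / h) ** 3 if r < h else 0.0
        ws.append(w)
        mx += w * x
        my += w * y
    total = sum(ws)  # assumes at least one neighbor lies within h
    mx, my = mx / total, my / total
    cxx = cxy = cyy = 0.0
    for w, (x, y) in zip(ws, points):
        dx, dy = x - mx, y - my
        cxx += w * dx * dx
        cxy += w * dx * dy
        cyy += w * dy * dy
    cxx, cxy, cyy = cxx / total, cxy / total, cyy / total
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + disc, tr / 2.0 - disc
```

For particles lying along a thin stream, the minor variance collapses toward zero while the major variance stays positive, which is exactly the situation where an isotropic kernel would blur the feature away.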
-
TR2010-931
2010
Design and Results of the 4th Annual Satisfiability Modulo Theories Competition (SMT-COMP 2008)
Barrett, Clark;
Deters, Morgan; Oliveras, Albert; Stump, Aaron
Abstract
|
PDF
Title: Design and Results of the 4th Annual Satisfiability Modulo Theories Competition (SMT-COMP 2008)
Author(s): Barrett, Clark; Deters, Morgan; Oliveras, Albert; Stump, Aaron
Abstract:
The Satisfiability Modulo Theories Competition (SMT-COMP) is an annual competition aimed at stimulating the advance of the state-of-the-art techniques and tools developed by the Satisfiability Modulo Theories (SMT) community. As with the first three editions, SMT-COMP 2008 was held as a satellite event of CAV 2008, held July 7-14, 2008. This report gives an overview of the rules, competition format, benchmarks, participants and results of SMT-COMP 2008.
-
M.S. Thesis
2010
DTAC: A method for planning to claim in Bridge
Bethe, Paul
Abstract
|
PDF
Title: DTAC: A method for planning to claim in Bridge
Candidate: Bethe, Paul
Advisor(s): Davis, Ernest
Abstract:
The DTAC program uses depth-first search to find an unconditional claim in bridge; that is, a line of play that is guaranteed to succeed whatever the distribution of the outstanding cards among the defenders. It can also find claims that are guaranteed to succeed under specified assumptions about the distribution of the defenders' cards. Lastly, DTAC can find a claim which requires losing a trick at some point. Using transposition tables to detect repeated positions, DTAC can carry out a complete DFS to find an unconditional ordered claim in less than 0.001 seconds on average, and less than 1 second for claims which lose a trick. The source code for DTAC is available from: http://cs.nyu.edu/~pmb309/DTAC.html
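The core search idea, depth-first search with a transposition table to detect repeated positions, can be sketched on a toy single-suit game. This is an illustrative stand-in, not DTAC's actual rules or code: cards are plain integer ranks, trick play is simplified to rank comparison, and memoization plays the role of the transposition table.

```python
from functools import lru_cache

def guaranteed_claim(our_cards, their_cards):
    """Toy 'unconditional claim' check: is there an order of play in which
    we win every trick against ANY defender reply? (Illustrative only.)"""
    @lru_cache(maxsize=None)            # transposition table: repeated positions hit the cache
    def search(ours, theirs):
        if not ours:                    # no cards left: the claim succeeded
            return True
        # An unconditional claim needs SOME card we lead to win against
        # EVERY defender reply, with the rest of the claim still succeeding.
        for i, c in enumerate(ours):
            rest = ours[:i] + ours[i + 1:]
            if all(c > d and search(rest, theirs[:j] + theirs[j + 1:])
                   for j, d in enumerate(theirs)):
                return True
        return False
    return search(tuple(sorted(our_cards, reverse=True)),
                  tuple(sorted(their_cards, reverse=True)))

print(guaranteed_claim([14, 13, 12], [5, 4, 3]))   # True: top cards win every trick
print(guaranteed_claim([14, 13, 2], [12, 4, 3]))   # False: the 2 must lose a trick
```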
-
Ph.D. Thesis
2010
On the Randomness Requirements for Privacy
Bosley, Carleton
Abstract
|
PDF
Title: On the Randomness Requirements for Privacy
Candidate: Bosley, Carleton
Advisor(s): Dodis, Yevgeniy
Abstract:
Most cryptographic primitives require randomness (for example, to generate secret keys). Usually, one assumes that perfect randomness is available, but, conceivably, such primitives might be built under weaker, more realistic assumptions. This is known to be achievable for many authentication applications, when entropy alone is typically sufficient. In contrast, all known techniques for achieving privacy seem to fundamentally require (nearly) perfect randomness. We ask whether this is just a coincidence, or whether, perhaps, privacy inherently requires true randomness.
We completely resolve this question for information-theoretic private-key encryption, where parties wish to encrypt a b-bit value using a shared secret key sampled from some imperfect source of randomness S. Our technique also extends to related primitives which are sufficiently binding and hiding, including computationally secure commitments and public-key encryption.
Our main result shows that if such an n-bit source S allows for a secure encryption of b bits, where b > log n, then one can deterministically extract nearly b almost perfect random bits from S. Further, the restriction that b > log n is nearly tight: there exist sources S allowing one to perfectly encrypt (log n - log log n) bits, but not to deterministically extract even a single slightly unbiased bit.
Hence, to a large extent, true randomness is inherent for encryption: either the key length must be exponential in the message length b, or one can deterministically extract nearly b almost unbiased random bits from the key. In particular, the one-time pad scheme is essentially "universal".
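For reference, the one-time pad mentioned above is the canonical example of encryption consuming perfect randomness: a uniformly random key as long as the message, combined by XOR, gives information-theoretic secrecy. A minimal sketch (illustrative, not from the thesis):

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # Perfect secrecy requires the key to be uniform, used only once,
    # and as long as the message.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))          # uniform key, one byte per message byte
ct = otp_encrypt(msg, key)
print(otp_encrypt(ct, key) == msg)           # decryption is the same XOR; prints True
```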
-
Ph.D. Thesis
2010
Machine Learning Approaches to Gene Duplication and Transcription Regulation
Chen, Huang-Wen
Abstract
|
PDF
Title: Machine Learning Approaches to Gene Duplication and Transcription Regulation
Candidate: Chen, Huang-Wen
Advisor(s): Shasha, Dennis
Abstract:
Gene duplication can lead to genetic redundancy or functional divergence, when duplicated genes evolve independently or partition the original function. In this dissertation, we employed machine learning approaches to study two different views of this problem: 1) Redundome, which explored the redundancy of gene pairs in the genome of Arabidopsis thaliana, and 2) ContactBind, which focused on functional divergence of transcription factors by mutating contact residues to change binding affinity.
In the Redundome project, we used machine learning techniques to classify gene family members into redundant and non-redundant gene pairs in Arabidopsis thaliana, where sufficient genetic and genomic data is available. We showed that Support Vector Machines were two-fold more precise than single-attribute classifiers, and performed among the best of the machine learning algorithms considered. The machine learning methods predict that about half of all genes in Arabidopsis show the signature of predicted redundancy with at least one, but typically fewer than three, other family members. Interestingly, a large proportion of predicted redundant gene pairs were relatively old duplications (e.g., Ks>1), suggesting that redundancy is stable over long evolutionary periods. The genome-wide predictions were plotted with similarity trees based on ClustalW alignment scores, and can be accessed at http://redundome.bio.nyu.edu .
In the ContactBind project, we used Bayesian networks to model dependencies between contact residues in transcription factors and binding site sequences. Based on the models learned from various binding experiments, we predicted binding motifs and their locations on promoters for three families of transcription factors in three species. The predictions are publicly available at http://contactbind.bio.nyu.edu . The website also provides tools to predict binding motifs and their locations for novel protein sequences of transcription factors. Users can construct their own Bayesian networks for new families once such familial binding data becomes available.
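As a rough illustration of the comparison above between SVMs and single-attribute classifiers, here is a toy sketch with synthetic gene-pair features. The attributes, the labeling rule, and the Pegasos-style linear-SVM trainer are all our own assumptions for illustration, not the Redundome pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
X = rng.uniform(-1, 1, size=(n, 2))          # toy pair attributes, e.g. (sequence identity, expression correlation)
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # "redundant" iff jointly similar (toy rule)

def train_linear_svm(X, y, lam=0.01, epochs=500):
    """Pegasos-style sub-gradient descent on the hinge loss (sketch)."""
    w = np.zeros(X.shape[1])
    for t in range(1, epochs + 1):
        lr = 1.0 / (lam * t)                  # decreasing step size
        mask = y * (X @ w) < 1                # margin violators
        if mask.any():
            grad = lam * w - (y[mask][:, None] * X[mask]).mean(0)
        else:
            grad = lam * w
        w -= lr * grad
    return w

w = train_linear_svm(X, y)
svm_acc = (np.sign(X @ w) == y).mean()
# Best classifier that thresholds a single attribute:
single_acc = max((np.sign(X[:, j]) == y).mean() for j in range(2))
print(svm_acc > single_acc)                   # combining attributes beats any one alone
```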
-
Ph.D. Thesis
2010
New Privacy-Preserving Architectures for Identity-/Attribute-based Encryption
Chow, Sze Ming
Abstract
|
PDF
Title: New Privacy-Preserving Architectures for Identity-/Attribute-based Encryption
Candidate: Chow, Sze Ming
Advisor(s): Dodis, Yevgeniy; Shoup, Victor
Abstract:
The notion of identity-based encryption (IBE) was proposed as an economical alternative to public-key infrastructures. IBE is also a useful building block in various cryptographic primitives such as searchable encryption. A generalization of IBE is attribute-based encryption (ABE). A major application of ABE is fine-grained cryptographic access control of data. Research on these topics is still actively continuing.
However, the security and privacy of IBE and ABE hinge on the assumption that the authority which sets up the system is honest. Our study aims to reduce this trust assumption.
The inherent key escrow of IBE has sparked numerous debates in the cryptography/security community. A curious key generation center (KGC) can simply generate the user's private key to decrypt a ciphertext. However, can a KGC still decrypt if it does not know the intended recipient of the ciphertext? This question is answered by formalizing KGC anonymous ciphertext indistinguishability (ACI-KGC). None of the existing practical pairing-based IBE schemes without random oracles achieves this notion. In this thesis, we propose an IBE scheme with ACI-KGC, and a new system architecture with an anonymous secret key generation protocol such that the KGC can issue keys to authenticated users without knowing the list of users' identities. This also matches the practice that authentication should be done with the local registration authorities. Our proposal can be viewed as mitigating the key escrow problem in a new dimension.
For ABE, it is not realistic to trust a single authority to monitor all attributes and hence distributing control over many attribute-authorities is desirable. A multi-authority ABE scheme can be realized with a trusted central authority (CA) which issues part of the decryption key according to a user's global identifier (GID). However, this CA may have the power to decrypt every ciphertext, and the use of a consistent GID allows the attribute-authorities to collectively build a full profile of all of a user's attributes. This thesis proposes a solution without the trusted CA and without compromising users' privacy, thus making ABE more usable in practice.
Underlying both contributions are our new privacy-preserving architectures, enabled by borrowing techniques from anonymous credentials.
-
TR2010-930
2010
Coordination Mechanisms for Weighted Sum of Completion Times
Cole, Richard;
Gkatzelis, Vasilis; Mirrokni, Vahab
Abstract
|
PDF
Title: Coordination Mechanisms for Weighted Sum of Completion Times
Author(s): Cole, Richard; Gkatzelis, Vasilis; Mirrokni, Vahab
Abstract:
We study policies aiming to minimize the weighted sum of completion times of jobs in the context of coordination mechanisms for selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. In short, we present the first constant-factor-approximate coordination mechanisms for this model.
First, we present a generalization of the ShortestFirst policy for weighted jobs, called SmithRule; we prove that it achieves an approximation ratio of 4 and we show that any set of non-preemptive ordering policies can result in equilibria with approximation ratio at least 3 even for unweighted jobs. Then, we present ProportionalSharing, a preemptive strongly local policy that beats this lower bound of 3; we show that this policy achieves an approximation ratio of 2.61 for the weighted sum of completion times and that the EqualSharing policy achieves an approximation ratio of 2.5 for the (unweighted) sum of completion times. Furthermore, we show that ProportionalSharing induces potential games (in which best-response dynamics converge to pure Nash equilibria).
All of our upper bounds are for the robust price of anarchy, defined by Roughgarden [36], so they naturally extend to mixed Nash equilibria, correlated equilibria, and regret minimization dynamics. Finally, we prove that our price of anarchy bound for ProportionalSharing can be used to design a new combinatorial constant-factor approximation algorithm minimizing weighted completion time for unrelated machine scheduling.
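The SmithRule policy named above generalizes the classical Smith rule, which on a single machine orders jobs by nonincreasing weight-to-processing-time ratio and minimizes the weighted sum of completion times there. A minimal single-machine sketch (illustrative only; the paper's setting is unrelated machines with selfish jobs):

```python
def smith_rule_schedule(jobs):
    """jobs: list of (weight, processing_time) pairs.
    Returns (order, weighted sum of completion times) for one machine."""
    # Smith's rule: schedule in nonincreasing order of weight / processing time.
    order = sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)
    t, total = 0, 0
    for w, p in order:
        t += p              # completion time of this job
        total += w * t      # accumulate weighted completion time
    return order, total

jobs = [(1, 3), (2, 1), (3, 2)]         # (weight, processing_time)
order, cost = smith_rule_schedule(jobs)
print(order)   # [(2, 1), (3, 2), (1, 3)] -- ratios 2, 1.5, 1/3
print(cost)    # 2*1 + 3*3 + 1*6 = 17
```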
-
Ph.D. Thesis
2010
Tools and Techniques for the Sound Verification of Low Level Code
Conway, Christopher L.
Abstract
|
PDF
Title: Tools and Techniques for the Sound Verification of Low Level Code
Candidate: Conway, Christopher L.
Advisor(s): Barrett, Clark
Abstract:
Software plays an increasingly crucial role in nearly every facet of modern life, from communications infrastructure to the control systems in automobiles, airplanes, and power plants. To achieve the highest degree of reliability for the most critical pieces of software, it is necessary to move beyond ad hoc testing and review processes towards verification---to prove using formal methods that a piece of code exhibits exactly those behaviors allowed by its specification and no others.
A significant portion of the existing software infrastructure is written in low-level languages like C and C++. Features of these languages present significant verification challenges. For example, unrestricted pointer manipulation means that we cannot prove even the simplest properties of programs without first collecting precise information about potential aliasing relationships between variables.
In this thesis, I present several contributions. The first is a general framework for combining program analyses that are only conditionally sound. Using this framework, I show it is possible to design a sound verification tool that relies on a separate, previously-computed pointer analysis.
The second contribution of this thesis is Cascade, a multi-platform, multi-paradigm framework for verification. Cascade includes support for precise analysis of low-level C code, as well as for higher-level languages such as SPL.
Finally, I describe a novel technique for the verification of datatype invariants in low-level systems code. The programmer provides a high-level specification for a low-level implementation in the form of inductive datatype declarations and code assertions. The connection between the high-level semantics and the implementation code is then checked using bit-precise reasoning. An implementation of this datatype verification technique is available as a Cascade module.
-
Ph.D. Thesis
2010
Probabilistic and Topological methods in Computational Geometry
Dhandapani, Raghavan
Abstract
|
PDF
Title: Probabilistic and Topological methods in Computational Geometry
Candidate: Dhandapani, Raghavan
Advisor(s): Pach, Janos
Abstract:
We consider four problems connected by the common thread of geometry. The first three involve problems and algorithms that arise in applications that a priori do not involve geometry, but geometry turns out to be the right language for visualizing and analyzing them. In the fourth, we generalize some well-known results in geometry to the topological plane. The techniques we use come from probability and topology.
First, we consider two algorithms that work well in practice but the theoretical mechanism behind whose success is not very well understood.
Greedy routing is a routing mechanism that is commonly used in wireless sensor networks. While routing on the Internet uses standard established protocols, routing in ad-hoc networks with little structure (like sensor networks) is more difficult. Practitioners have devised algorithms that work well in practice, but there were no known theoretical guarantees. We provide the first such result in this area by showing that greedy routing can be made to work on planar triangulations.
Linear Programming is a technique for optimizing a linear function subject to linear constraints. Simplex Algorithms are a family of algorithms that have proven quite successful in solving Linear Programs in practice. However, examples of Linear Programs on which these algorithms are very inefficient have been obtained by researchers. In order to explain this discrepancy between theory and practice, many authors have shown that Simplex Algorithms are efficient in expectation on randomized Linear Programs. We strengthen these results by proving a partial concentration bound for the Shadow Vertex Simplex Algorithm.
Next, we point out a limitation in an algorithm that is used commonly by practitioners and suggest a way of overcoming this.
Recommendation Systems are algorithms that are used to recommend goods (books, movies etc.) to users based on the similarities between their past preferences and those of other users. Low Rank Approximation is a common method used for this. We point out a common limitation of this method under ill-conditioning: the presence of multiple local minima. We also suggest a simple averaging-based technique to overcome this limitation.
Finally, we consider some basic results in convexity like Radon's, Helly's and Caratheodory's theorems and generalize them to the topological plane, i.e., a plane which has the concept of a linear path which is analogous to a straight line but no notion of metric or distances.
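The greedy routing rule discussed above can be sketched in a few lines: at each node, forward to the neighbor geometrically closest to the destination, and fail if no neighbor makes progress (a local minimum, where plain greedy routing gets stuck). The graph and coordinates below are illustrative, not the thesis's construction:

```python
import math

def greedy_route(coords, adj, src, dst):
    """Greedy geometric routing sketch. Returns the path, or None if stuck."""
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        # Candidate: the neighbor closest (in the embedding) to the destination.
        best = min(adj[here], key=lambda v: math.dist(coords[v], coords[dst]))
        if math.dist(coords[best], coords[dst]) >= math.dist(coords[here], coords[dst]):
            return None          # no neighbor is strictly closer: greedy fails here
        path.append(best)
    return path

# Toy embedded path graph 0 - 1 - 2 - 3.
coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (2, 1)}
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(greedy_route(coords, adj, 0, 3))   # [0, 1, 2, 3]
```

The thesis's result can be read as: planar triangulations admit embeddings on which this rule never returns None.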
-
TR2010-936
2010
An Iterative Substructuring Algorithm for Two-dimensional Problems in H(curl)
Dohrmann, Clark R.;
Widlund, Olof B.
Abstract
|
PDF
Title: An Iterative Substructuring Algorithm for Two-dimensional Problems in H(curl)
Author(s): Dohrmann, Clark R.; Widlund, Olof B.
Abstract:
A domain decomposition algorithm, similar to classical iterative substructuring algorithms, is presented for two-dimensional problems in the space H0(curl). It is defined in terms of a coarse space and local subspaces associated with individual edges of the subdomains into which the domain of the problem has been subdivided. The algorithm differs from others in three basic respects. First, it can be implemented in an algebraic manner that does not require access to individual subdomain matrices or a coarse discretization of the domain; this is in contrast to algorithms of the BDDC, FETI-DP, and classical two-level overlapping Schwarz families. Second, favorable condition number bounds can be established over a broader range of subdomain material properties than in previous studies. Third, we are able to develop theory for quite irregular subdomains and bounds for the condition number of our preconditioned conjugate gradient algorithm, which depend only on a few geometric parameters.
The coarse space for the algorithm is based on simple energy minimization concepts, and its dimension equals the number of subdomain edges. Numerical results are presented which confirm the theory and demonstrate the usefulness of the algorithm for a variety of mesh decompositions and distributions of material properties.
-
Ph.D. Thesis
2010
Semi-Supervised Learning via Generalized Maximum Entropy
Erkan, Ayse Naz
Abstract
|
PDF
Title: Semi-Supervised Learning via Generalized Maximum Entropy
Candidate: Erkan, Ayse Naz
Advisor(s): LeCun, Yann
Abstract:
The maximum entropy (MaxEnt) framework has been studied extensively in the supervised setting. Here, the goal is to find a distribution p that maximizes an entropy function while enforcing data constraints, so that the expected values of some (pre-defined) features with respect to p match their empirical counterparts approximately. Using different entropy measures, different model spaces for p, and different approximation criteria for the data constraints yields a family of discriminative supervised learning methods (e.g., logistic regression, conditional random fields, least squares and boosting). This framework is known as the generalized maximum entropy framework.
Semi-supervised learning (SSL) has emerged in the last decade as a promising field that enables utilizing unlabeled data along with labeled data so as to increase the accuracy and robustness of inference algorithms. However, most SSL algorithms to date have had trade-offs, for instance in terms of scalability or applicability to multi-categorical data.
In this thesis, we extend the generalized MaxEnt framework to develop a family of novel SSL algorithms using two different approaches. i. Introducing similarity constraints: we incorporate unlabeled data via modifications to the primal MaxEnt objective in terms of additional potential functions. A potential function stands for a closed proper convex function that can take the form of a constraint and/or a penalty representing our structural assumptions on the data geometry. Specifically, we impose similarity constraints as additional penalties based on the semi-supervised smoothness assumption; i.e., we restrict the generalized MaxEnt problem such that similar samples have similar model outputs. ii. Augmenting constraints on model features: we incorporate unlabeled data to enhance the estimates of the model and empirical expectations, based on our assumptions on the data geometry.
In particular, we derive the semi-supervised formulations for three specific instances of the generalized MaxEnt on conditional distributions, namely logistic regression and kernel logistic regression for multi-class problems, and conditional random fields for structured output prediction problems. A thorough empirical evaluation on standard data sets that are widely used in the literature demonstrates the validity and competitiveness of the proposed algorithms. In addition to these benchmark data sets, we apply our approach to two real-life problems: i. vision-based robot grasping, and ii. remote sensing image classification, where the scarcity of the labeled training samples is the main bottleneck in the learning process. For the particular case of grasp learning, we propose a combination of semi-supervised learning and active learning, another sub-field of machine learning that is focused on the scarcity of labeled samples, when the problem setup is suitable for incremental labeling.
The novel SSL algorithms proposed in this thesis have numerous advantages over the existing semi-supervised algorithms as they yield convex, scalable, inherently multi-class loss functions that can be kernelized naturally.
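As one concrete instance of the generalized MaxEnt family named above, logistic regression can be fit by matching empirical and model feature expectations: the gradient of the conditional log-likelihood is exactly their difference. A tiny gradient-ascent sketch on synthetic data (all data and parameters are illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(float)   # toy binary labels

w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))              # model's expected label
    # MaxEnt duality in action: the update is driven by the gap between
    # empirical (y) and model (p) feature expectations.
    w += 0.1 * (X.T @ (y - p)) / len(y)

acc = ((X @ w > 0) == (y == 1)).mean()
print(acc)
```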
-
TR2010-929
2010
Information Extraction on High-School Level Chemistry Labs
Galron, Daniel
Abstract
|
PDF
Title: Information Extraction on High-School Level Chemistry Labs
Author(s): Galron, Daniel
Abstract:
In this report we present a feasibility study on automatically interpreting instructions found in a set of high school chemistry labs, and discuss the role of deep domain knowledge in the interpretation. We define the task of sentence-level interpretation as the extraction of symbolic representations of the sentence semantics. In the broader scope, the sentence-level semantics of a particular sentence will be resolved with semantics from other sentences in the lab, along with domain knowledge, to disambiguate and reason about a physical system. The task of general automatic sentence-level interpretation is a difficult one. The general problem is not very well defined in the natural language processing research community, and few researchers have studied it. The common practice is to decompose the problem into subtasks, such as resolving coreferences of noun phrases, labeling the semantic roles of arguments to predicates, and identifying word categories. We describe a pipeline combining these subtasks, along with parsing, to create a system capable of extracting sentence-level semantics. All the systems used for the subtasks are off-the-shelf, and we should stress that such a system will be highly error-prone, for reasons we discuss. Finally, we do a close study of the chemistry lab corpus, and analyze each instruction to determine the feasibility of its automatic interpretation and the role of deep domain knowledge in its disambiguation and understanding.
-
Ph.D. Thesis
2010
Solving Quantified First Order Formulas in Satisfiability Modulo Theories
Ge, Yeting
Abstract
|
PDF
Title: Solving Quantified First Order Formulas in Satisfiability Modulo Theories
Candidate: Ge, Yeting
Advisor(s): Barrett, Clark
Abstract:
Design errors in computer systems, i.e. bugs, can cause inconvenience, loss of data and time, and in some cases catastrophic damages. One approach for improving design correctness is formal methods: techniques aiming at mathematically establishing that a piece of hardware or software satisfies certain properties. For some industrial cases in which formal methods are utilized, quantified first order formulas in satisfiability modulo theories (SMT) are useful. This dissertation presents several novel techniques for solving quantified formulas in SMT.
In general, deciding a quantified formula in SMT is undecidable. The practical approach for general quantifier reasoning in SMT is heuristics-based instantiation. This dissertation proposes a number of new heuristics that solve several challenges. Experimental results show that with the new heuristics significantly more benchmarks can be solved than before.
When we consider only formulas within certain fragments of first order logic, it is possible to have complete algorithms based on instantiation. We propose several new fragments, and we prove that formulas in these fragments can be solved by a complete algorithm based on instantiation. For satisfiable quantified formulas in these fragments, we show how to construct the models.
As SMT solvers grow in complexity, their correctness becomes questionable. A practical method to improve correctness is to check the proofs from SMT solvers. We propose a proof translator that translates proofs from the SMT solver CVC3 into the trusted solver HOL Light, which actually checks the proofs. Experiments with the proof translator uncovered a faulty proof rule in CVC3 and two mislabeled quantified benchmarks in the SMT benchmark library SMT-LIB.
-
TR2010-922
2010
Polite Theories Revisited
Jovanovic, Dejan;
Barrett, Clark
Abstract
|
PDF
Title: Polite Theories Revisited
Author(s): Jovanovic, Dejan; Barrett, Clark
Abstract:
The classic method of Nelson and Oppen for combining decision procedures requires the theories to be stably-infinite. Unfortunately, some important theories do not fall into this category (e.g. the theory of bit-vectors). To remedy this problem, previous work introduced the notion of polite theories. Polite theories can be combined with any other theory using an extension of the Nelson-Oppen approach. In this paper we revisit the notion of polite theories, fixing a subtle flaw in the original definition. We give a new combination theorem which specifies the degree to which politeness is preserved when combining polite theories. We also give conditions under which politeness is preserved when instantiating theories by identifying two sorts. These results lead to a more general variant of the theorem for combining multiple polite theories.
-
M.S. Thesis
2010
TestRig: A Platform independent system testing tool
Kaul, Vaibhav
Abstract
|
PDF
Title: TestRig: A Platform independent system testing tool
Candidate: Kaul, Vaibhav
Advisor(s): Shasha, Dennis
Abstract:
The goal of the TestRig software is to give a test engineer a fixed interface to help with system/integration testing of software systems. TestRig is platform independent and can be used to test software systems written in any programming language. In addition, it provides templates and examples of using various open source testing tools to help users design their test cases. TestRig has been designed with the current software development scenario in mind, where complex systems are often created using multiple programming languages across different platforms. The challenge is to have a defined set of rules that can test any such system. The software makes use of various open source testing tools to run tests and verify results, which enables a user to test a system at different levels, such as performance testing, black-box testing, and user acceptance testing. TestRig is open source and draws on a programmer's creativity to test across multiple scenarios. The thesis shows how different software systems have been tested using TestRig.
-
Ph.D. Thesis
2010
An Algorithmic Enquiry Concerning Causality
Kleinberg, Samantha
Abstract
|
PDF
Title: An Algorithmic Enquiry Concerning Causality
Candidate: Kleinberg, Samantha
Advisor(s): Mishra, Bhubaneswar
Abstract:
In many domains we face the problem of determining the underlying causal structure from time-course observations of a system. Whether we have neural spike trains in neuroscience, gene expression levels in systems biology, or stock price movements in finance, we want to determine why these systems behave the way they do. For this purpose we must assess which of the myriad possible causes are significant while aiming to do so with a feasible computational complexity. At the same time, there has been much work in philosophy on what it means for something to be a cause, but comparatively little attention has been paid to how we can identify these causes. Algorithmic approaches from computer science have provided the first steps in this direction, but fail to capture the complex, probabilistic and temporal nature of the relationships we seek.
This dissertation presents a novel approach to the inference of general (type-level) and singular (token-level) causes. The approach combines philosophical notions of causality with algorithmic approaches built on model checking and statistical techniques for false discovery rate control. By using a probabilistic computation tree logic to describe both cause and effect, we allow for complex relationships and explicit description of the time between cause and effect as well as the probability of this relationship being observed (e.g. "a and b until c, causing d in 10-20 time units"). Using these causal formulas and their associated probabilities, we develop a novel measure for the significance of a cause for its effect, thus allowing discovery of those that are statistically interesting, determined using the concepts of multiple hypothesis testing and false discovery control. We develop algorithms for testing these properties in time-series observations and for relating the inferred general relationships to token-level events (described as sequences of observations). Finally, we illustrate these ideas with example data from both neuroscience and finance, comparing the results to those found with other inference methods. The results demonstrate that our approach achieves superior control of false discovery rates, due to its ability to appropriately represent and infer temporal information.
-
TR2010-926
2010
The Temporal Logic of Token Causes
Kleinberg, Samantha;
Mishra, Bud
Abstract
|
PDF
Title: The Temporal Logic of Token Causes
Author(s): Kleinberg, Samantha; Mishra, Bud
Abstract:
While type causality helps understand general relationships such as the etiology of a disease (smoking causing lung cancer), token causality aims to explain causal connections in specific instantiated events, such as the diagnosis of a specific patient (Ravi's developing lung cancer after a 20-year smoking habit). Understanding why something happened, as in these examples, is central to reasoning in such diverse cases as the diagnosis of patients, understanding why the US financial market collapsed in 2007 and finding a causal explanation for Obama's victory over Clinton in the US primary. However, despite centuries of work in philosophy and decades of research in computer science, the problem of how to rigorously formalize token causality and how to automate such reasoning has remained unsolved. In this paper, we show how to use type-level causal relationships, represented as temporal logic formulas, together with philosophical principles, to reason about these token-level cases. Finally, we show how this method can correctly reason about examples that have traditionally proven difficult for both computational and philosophical theories to handle.
-
TR2010-932
2010
An overlapping domain decomposition method for the Reissner-Mindlin Plate with the Falk-Tu Elements
Lee, Jong Ho
Abstract
|
PDF
Title: An overlapping domain decomposition method for the Reissner-Mindlin Plate with the Falk-Tu Elements
Author(s): Lee, Jong Ho
Abstract:
The Reissner-Mindlin plate theory models a thin plate with thickness t. The condition numbers of finite element approximations of this model deteriorate badly as the thickness t of the plate converges to 0. In this paper, we develop an overlapping domain decomposition method for the Reissner-Mindlin plate model, discretized by the Falk-Tu elements, whose convergence rate does not deteriorate as t converges to 0. It is shown that the condition number of this overlapping method is bounded by C(1 + H/delta)^3 (1 + log(H/h))^2. Here H is the maximum diameter of the subdomains, delta the size of the overlap between subdomains, and h the element size. Numerical examples are provided to confirm the theory.
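In LaTeX notation, the condition number bound stated above reads (with kappa denoting the condition number of the preconditioned operator, and H, delta, h as defined in the abstract):

```latex
\kappa \le C \left(1 + \frac{H}{\delta}\right)^{3} \left(1 + \log\frac{H}{h}\right)^{2}
```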
-
Ph.D. Thesis
2010
Time Series Modeling with Hidden Variables and Gradient-Based Algorithms
Mirowski, Piotr
Abstract
|
PDF
Title: Time Series Modeling with Hidden Variables and Gradient-Based Algorithms
Candidate: Mirowski, Piotr
Advisor(s): LeCun, Yann
Abstract:
We collect time series from real-world phenomena, such as gene interactions in biology or word frequencies in consecutive news articles. However, these data present us with an incomplete picture, as they result from complex dynamical processes involving unobserved state variables. Research on state-space models is motivated by simultaneously trying to infer hidden state variables from observations, as well as learning the associated dynamic and generative models.
I have developed a tractable, gradient-based method for training Dynamic Factor Graphs (DFG) with continuous latent variables. A DFG consists of (potentially nonlinear) factors modeling joint probabilities between hidden and observed variables. The DFG assigns a scalar energy to each configuration of variables, and a gradient-based inference procedure finds the minimum-energy state sequence for a given observation sequence. We approximate maximum likelihood learning by minimizing the expected energy over training sequences with respect to the factors' parameters. These alternated inference and parameter updates constitute a deterministic EM-like procedure.
Using nonlinear factors such as deep, convolutional networks, DFGs were shown to reconstruct chaotic attractors, to outperform a time series prediction benchmark, and to successfully impute motion capture data where a large number of markers were missing. In joint work with the NYU Plant Systems Biology Lab, DFGs have subsequently been applied to the discovery of gene regulation networks by learning the dynamics of mRNA expression levels.
DFGs have also been extended into a deep auto-encoder architecture, and used on time-stamped text documents, with word frequencies as inputs. We focused on collections of documents that exhibit a structure over time. Working as dynamic topic models, DFGs could extract a latent trajectory from consecutive political speeches; applied to news articles, they achieved state-of-the-art text categorization and retrieval performance.
Finally, I used an embodiment of DFGs to evaluate the likelihood of discrete sequences of words in text corpora, relying on dynamics over word embeddings. Collaborating with AT&T Labs Research on a project in speech recognition, we improved on existing continuous statistical language models by enriching them with word features and long-range topic dependencies.
-
TR2010-933
2010
An Overlapping Schwarz Algorithm for Raviart-Thomas Vector Fields with Discontinuous Coefficients
Oh, Duk-Soon
Abstract
|
PDF
Title: An Overlapping Schwarz Algorithm for Raviart-Thomas Vector Fields with Discontinuous Coefficients
Author(s): Oh, Duk-Soon
Abstract:
Overlapping Schwarz methods form one of two major families of domain decomposition methods. We consider a two-level overlapping Schwarz method for Raviart-Thomas vector fields. The coarse part of the preconditioner is based on the energy-minimizing extensions and the local parts are based on traditional solvers on overlapping subdomains. We show that the condition number grows linearly with the logarithm of the number of degrees of freedom in the individual subdomains and linearly with the relative overlap between the overlapping subdomains. The condition number of the method is also independent of the values and jumps of the coefficients. Numerical results for 2D and 3D problems, which support the theory, are also presented.
-
TR2010-928
2010
BDDC preconditioners for spectral element discretizations of almost incompressible elasticity in three dimensions
Pavarino, Luca F.;
Widlund, Olof B.; Zampini, Stefano
Abstract
|
PDF
Title: BDDC preconditioners for spectral element discretizations of almost incompressible elasticity in three dimensions
Author(s): Pavarino, Luca F.; Widlund, Olof B.; Zampini, Stefano
Abstract:
BDDC algorithms are constructed and analyzed for the system of almost incompressible elasticity discretized with Gauss-Lobatto-Legendre spectral elements in three dimensions. Initially mixed spectral elements are employed to discretize the almost incompressible elasticity system, but a positive definite reformulation is obtained by eliminating all pressure degrees of freedom interior to each subdomain into which the spectral elements have been grouped. Appropriate sets of primal constraints can be associated with the subdomain vertices, edges, and faces so that the resulting BDDC methods have a fast convergence rate independent of the almost incompressibility of the material. In particular, the condition number of the BDDC preconditioned operator is shown to depend only weakly on the polynomial degree \(n\), the ratio \(H/h\) of subdomain and element diameters, and the inverse of the inf-sup constants of the subdomains and the underlying mixed formulation, while being scalable, i.e., independent of the number of subdomains and robust, i.e., independent of the Poisson ratio and Young's modulus of the material considered. These results also apply to the related FETI-DP algorithms defined by the same set of primal constraints. Numerical experiments carried out on parallel computing systems confirm these results.
-
Ph.D. Thesis
2010
Structure Prediction and Visualization in Molecular Biology
Poultney, Christopher
Abstract
|
PDF
Title: Structure Prediction and Visualization in Molecular Biology
Candidate: Poultney, Christopher
Advisor(s): Shasha, Dennis
Abstract:
The tools of computer science can be a tremendous help to the working biologist. Two broad areas where this is particularly true are visualization and prediction. In visualization, the size of the data involved often makes meaningful exploration of the data and discovery of salient features difficult and time-consuming. Similarly, intelligent prediction algorithms can greatly reduce the lab time required to achieve significant results, or can reduce an intractable space of potential experiments to a tractable size.
While the thesis discusses both a visualization technique and a machine learning problem, the thesis presentation will focus exclusively on the machine learning problem: prediction of temperature-sensitive mutations from protein structure. Temperature-sensitive mutations are a tremendously valuable research tool, particularly for studying genes such as yeast essential genes. To date, most methods for generating temperature-sensitive mutations involve large-scale random mutations followed by an intensive screening and characterization process. While there have been successful efforts to improve this process by rational design of temperature-sensitive proteins, surprisingly little work has been done in the area of predicting which mutations will exhibit a temperature-sensitive phenotype. We describe a system that, given the structure of a protein of interest, uses a combination of protein structure prediction and machine learning to provide a ranked "top 5" list of likely candidates for temperature-sensitive mutations.
-
TR2010-934
2010
An Empirical Bayesian Interpretation and Generalization of NL-means
Raphan, Martin;
Simoncelli, Eero P.
Abstract
|
PDF
Title: An Empirical Bayesian Interpretation and Generalization of NL-means
Author(s): Raphan, Martin; Simoncelli, Eero P.
Abstract:
A number of recent algorithms in signal and image processing are based on the empirical distribution of localized patches. Here, we develop a nonparametric empirical Bayesian estimator for recovering an image corrupted by additive Gaussian noise, based on fitting the density over image patches with a local exponential model. The resulting solution is in the form of an adaptively weighted average of the observed patch with the mean of a set of similar patches, and thus both justifies and generalizes the recently proposed nonlocal-means (NL-means) method for image denoising. Unlike NL-means, our estimator includes a dependency on the size of the patch similarity neighborhood, and we show that this neighborhood size can be chosen in such a way that the estimator converges to the optimal Bayes least squares estimator as the amount of data grows. We demonstrate the increase in performance of our method compared to NL-means on a set of simulated examples.
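As a simple illustration of the basic NL-means idea that the paper interprets and generalizes (this is the classic estimator on a 1D toy signal, not the authors' empirical Bayesian extension; the patch size and bandwidth h are arbitrary choices here):

```python
import numpy as np

def nl_means_1d(noisy, patch=5, search=15, h=0.7):
    """Classic NL-means on a 1D signal: each sample becomes a weighted
    average of nearby samples, weighted by patch similarity."""
    n = len(noisy)
    pad = patch // 2
    padded = np.pad(noisy, pad, mode="reflect")
    patches = np.stack([padded[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d2 = np.mean((patches[lo:hi] - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / h ** 2)          # similar patches get large weights
        out[i] = w @ noisy[lo:hi] / w.sum()
    return out

rng = np.random.default_rng(0)
clean = np.sign(np.sin(np.linspace(0, 4 * np.pi, 400)))  # piecewise-constant
noisy = clean + 0.3 * rng.normal(size=400)
denoised = nl_means_1d(noisy)
```

The paper's estimator additionally adapts the weight given to the observed sample itself as a function of the similarity neighborhood size.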
-
Ph.D. Thesis
2010
Theoretical Foundations and Algorithms for Learning with Multiple Kernels
Rostamizadeh, Afshin
Abstract
|
PDF
Title: Theoretical Foundations and Algorithms for Learning with Multiple Kernels
Candidate: Rostamizadeh, Afshin
Advisor(s): Mohri, Mehryar
Abstract:
Kernel-based algorithms have been used with great success in a variety of machine learning applications. These include algorithms such as support vector machines for classification, kernel ridge regression, ranking algorithms, clustering algorithms, and virtually all popular dimensionality reduction algorithms, since they are special instances of kernel principal component analysis.
However, the choice of the kernel, which is crucial to the success of these algorithms, has traditionally been left entirely to the user. Rather than requesting the user to commit to a specific kernel, multiple kernel algorithms require the user only to specify a family of kernels. This family of kernels can be used by a learning algorithm to form a combined kernel and derive an accurate predictor. This problem has attracted a lot of attention recently, both from the theoretical point of view and from the algorithmic, optimization, and application points of view.
This thesis presents a number of novel theoretical and algorithmic results for learning with multiple kernels.
It gives the first tight margin-based generalization bounds for learning kernels with Lp regularization. In particular, our margin bounds for L1 regularization are shown to have only a logarithmic dependency on the number of kernels, which is a significant improvement over all previous analyses. Our results also include stability-based guarantees for a class of regression algorithms. In all cases, these guarantees indicate the benefits of learning with a large number of kernels.
We also present a family of new two-stage algorithms for learning kernels based on a notion of alignment and give an extensive analysis of the properties of these algorithms. We show the existence of good predictors for the notion of alignment we define and give efficient algorithms for learning a maximum alignment kernel by showing that the problem can be reduced to a simple QP.
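A minimal sketch of the centered kernel alignment idea that such two-stage methods build on (the thesis's exact definition and normalization may differ; the data here are synthetic):

```python
import numpy as np

def center(K):
    """Center a kernel matrix in feature space."""
    n = K.shape[0]
    H = np.eye(n) - np.full((n, n), 1.0 / n)
    return H @ K @ H

def alignment(K1, K2):
    """Centered kernel alignment: cosine similarity between centered kernels."""
    A, B = center(K1), center(K2)
    return np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sign(X[:, 0])          # labels determined by the first feature
K_lin = X @ X.T               # linear kernel on the inputs
K_y = np.outer(y, y)          # ideal target kernel built from the labels
print(alignment(K_lin, K_y))  # higher alignment = kernel better matches labels
```

A two-stage method first chooses kernel combination weights to maximize alignment with the label kernel, then trains a standard predictor with the combined kernel.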
Finally, we also report the results of extensive experiments with our two-stage algorithms in classification and regression tasks, which show an improvement both over the uniform combination of kernels and over other state-of-the-art learning kernel methods for L1 and L2 regularization. These might constitute the first series of results for learning with multiple kernels that demonstrate a consistent improvement over a uniform combination of kernels.
-
Ph.D. Thesis
2010
Creating collections and evaluating viewpoints: Selection techniques for interface design
Secord, Adrian
Abstract
|
PDF
Title: Creating collections and evaluating viewpoints: Selection techniques for interface design
Candidate: Secord, Adrian
Advisor(s): Zorin, Denis
Abstract:
In computer graphics and user interface design, selection problems are those that require the user to select a collection consisting of a small number of items from a much larger library. This dissertation explores selection problems in two diverse domains: large personal multimedia collections, containing items such as personal photographs or songs, and camera positions for 3D objects, where each item is a different viewpoint observing an object. Multimedia collections consist of discrete items with strong associated metadata, while camera positions form a continuous space with weak metadata. In either domain, the items to be selected have rich interconnections and dependencies, making it difficult to successfully apply simple techniques (such as ranking) to aid the user. Accordingly, we develop separate approaches for the two domains.
For personal multimedia collections, we leverage the semantic metadata associated with each item (such as song title, artist name, etc.) and provide the user with a simple query language to describe their desired collection. Our system automatically suggests a collection of items that conform to the user's query. Since any query language has limited expressive power, and since users often create collections via exploration, we provide various refinement techniques that allow the user to expand, refine and explore their collection directly through examples.
For camera positioning, we do not have the advantage of having semantic metadata for each item, unlike in media collections. We instead create a proxy viewpoint goodness function which can be used to guide the solution of various selection problems involving camera viewpoints. This function is constructed from several different attributes of the viewpoint, such as how much surface area is visible, or how "curvy" the silhouette is. Since there are many possible viewpoint goodness functions, we conducted a large user study of viewpoint preference and use the results to evaluate thousands of different functions and find the best ones. While we suggest several goodness functions to the practitioner, our user study data and methodology can be used to evaluate any proposed goodness function; we hope it will be a useful tool for other researchers.
-
TR2010-924
2010
A Universal Calculus for Stream Processing Languages
Soulé, Robert;
Hirzel, Martin; Grimm, Robert; Gedik, Buğra; Andrade, Henrique; Kumar, Vibhore; Wu, Kun-Lung
Abstract
|
PDF
Title: A Universal Calculus for Stream Processing Languages
Author(s): Soulé, Robert; Hirzel, Martin; Grimm, Robert; Gedik, Buğra; Andrade, Henrique; Kumar, Vibhore; Wu, Kun-Lung
Abstract:
Stream processing applications such as algorithmic trading, MPEG processing, and web content analysis are ubiquitous and essential to business and entertainment. Language designers have developed numerous domain-specific languages that are both tailored to the needs of their applications, and optimized for performance on their particular target platforms. Unfortunately, the goals of generality and performance are frequently at odds, and prior work on the formal semantics of stream processing languages does not capture the details necessary for reasoning about implementations. This paper presents Brooklet, a core calculus for stream processing that allows us to reason about how to map languages to platforms and how to optimize stream programs. We translate from three representative languages, CQL, StreamIt, and Sawzall, to Brooklet, and show that the translations are correct. We formalize three popular and vital optimizations, data-parallel computation, operator fusion, and operator re-ordering, and show under which conditions they are correct. Language designers can use Brooklet to specify exactly how new features or languages behave. Language implementors can use Brooklet to show exactly under which circumstances new optimizations are correct. In ongoing work, we are developing an intermediate language for streaming that is based on Brooklet. We are implementing our intermediate language on System S, IBM's high-performance streaming middleware.
-
Ph.D. Thesis
2010
Analysis of Mass Spectrometry Data for Protein Identification In Complex Biological Mixtures
Spivak, Marina
Abstract
|
PDF
Title: Analysis of Mass Spectrometry Data for Protein Identification In Complex Biological Mixtures
Candidate: Spivak, Marina
Advisor(s): Greengard, Leslie
Abstract:
Mass spectrometry is a powerful technique in analytical chemistry that was originally designed to determine the composition of small molecules in terms of their constituent elements. In the last several decades, it has begun to be used for much more complex tasks, including the detailed analysis of the amino acid sequence that makes up an unknown protein and even the identification of multiple proteins present in a complex mixture. The latter problem is largely unsolved and the principal subject of this dissertation.
The fundamental difficulty in the analysis of mass spectrometry data is that of ill-posedness. There are multiple solutions consistent with the experimental data and the data is subject to significant amounts of noise. In this work, we have developed application-specific machine learning algorithms that (partially) overcome this ill-posedness. We make use of labeled examples of a single class of peptide fragments and of the unlabeled fragments detected by the instrument. This places the approach within the broader framework of semi-supervised learning.
Recently, there has been considerable interest in classification problems of this type, where the learning algorithm only has access to labeled examples of a single class and unlabeled data. The motivation for such problems is that in many applications, examples of one of the two classes are easy and inexpensive to obtain, whereas the acquisition of examples of the second class is difficult and labor-intensive. For example, in document classification, positive examples are documents that address a specific subject, while unlabeled documents are abundant. In movie rating, the positive data are the movies chosen by clients, while the unlabeled data are all remaining movies in the collection. In medical imaging, positive (labeled) data correspond to images of tissue affected by a disease, while the remaining available images of the same tissue comprise the unlabeled data. Protein identification using mass spectrometry is another variant of this general problem.
In this work, we propose application-specific machine learning algorithms to address this problem. The reliable identification of proteins from mixtures using mass spectrometry would provide an important tool in both biomedical research and clinical practice.
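The positive-plus-unlabeled setting described above can be made concrete with a toy baseline on synthetic data (a generic illustration, not the algorithm proposed in the thesis): treat the unlabeled pool as provisionally negative and fit a logistic regression; the hidden positives in the pool still score above the true negatives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic PU data: positives cluster near +2, negatives near -2 (2D).
pos = rng.normal(+2.0, 1.0, size=(100, 2))
neg = rng.normal(-2.0, 1.0, size=(100, 2))
# The unlabeled pool mixes hidden positives with the negatives.
unlabeled = np.vstack([rng.normal(+2.0, 1.0, size=(50, 2)), neg])

# Naive baseline: label unlabeled as 0 and fit logistic regression
# by batch gradient descent on the cross-entropy loss.
X = np.vstack([pos, unlabeled])
y = np.concatenate([np.ones(len(pos)), np.zeros(len(unlabeled))])
Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
w = np.zeros(3)
for _ in range(500):
    p_hat = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p_hat - y) / len(y)

def score(Z):
    """Probability-like score under the fitted model."""
    Zb = np.hstack([Z, np.ones((len(Z), 1))])
    return 1.0 / (1.0 + np.exp(-(Zb @ w)))
```

Because some true positives are (wrongly) labeled 0, the fitted probabilities are biased downward; application-specific methods such as those in this work correct for exactly this kind of bias.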
-
Ph.D. Thesis
2010
Matrix Approximation for Large-scale Learning
Talwalkar, Ameet
Abstract
|
PDF
Title: Matrix Approximation for Large-scale Learning
Candidate: Talwalkar, Ameet
Advisor(s): Mohri, Mehryar
Abstract:
Modern learning problems in computer vision, natural language processing, computational biology, and other areas are often based on large data sets of thousands to millions of training instances. However, several standard learning algorithms, such as kernel-based algorithms, e.g., Support Vector Machines, Kernel Ridge Regression, Kernel PCA, do not easily scale to such orders of magnitude. This thesis focuses on sampling-based matrix approximation techniques that help scale kernel-based algorithms to large-scale datasets. We address several fundamental theoretical and empirical questions including:
What approximation should be used? We discuss two common sampling-based methods, providing novel theoretical insights regarding their suitability for various applications and experimental results motivated by this theory. Our results show that one of these methods, the Nystrom method, is superior in the context of large-scale learning.
Do these approximations work in practice? We show the effectiveness of approximation techniques on a variety of problems. In the largest study to date for manifold learning, we use the Nystrom method to extract low-dimensional structure from high-dimensional data to effectively cluster face images. We also report good empirical results for kernel ridge regression and kernel logistic regression.
How should we sample columns? A key aspect of sampling-based algorithms is the distribution according to which columns are sampled. We study both fixed and adaptive sampling schemes as well as a promising ensemble technique that can be easily parallelized and generates superior approximations, both in theory and in practice.
How well do these approximations work in theory? We provide theoretical analyses of the Nystrom method to understand when this technique should be used. We present guarantees on approximation accuracy based on various matrix properties and analyze the effect of matrix approximation on actual kernel-based algorithms.
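A minimal sketch of the Nystrom approximation discussed above, under illustrative choices not taken from the thesis (an RBF kernel on 200 synthetic points, 50 uniformly sampled columns): the full kernel K is approximated by C W^+ C^T, where C holds the sampled columns and W their intersection block.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

# Full RBF kernel matrix (the object we want to avoid forming at scale).
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq / 10.0)

# Nystrom: sample m columns uniformly, then K ~= C @ pinv(W) @ C.T.
m = 50
idx = rng.choice(len(X), size=m, replace=False)
C = K[:, idx]                 # n x m block of sampled columns
W = K[np.ix_(idx, idx)]       # m x m intersection block
K_approx = C @ np.linalg.pinv(W) @ C.T

err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(f"relative Frobenius error: {err:.3f}")
```

Only the m sampled columns of K ever need to be computed, which is what makes the approach attractive for large-scale kernel methods; the sampling distribution for the columns is one of the questions studied in the thesis.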
This work has important consequences for the machine learning community since it extends to large-scale applications the benefits of kernel-based algorithms. The crucial aspect of this research, involving low-rank matrix approximation, is of independent interest within the field of numerical linear algebra.
-
TR2010-935
2010
Learning Image Decompositions with Hierarchical Sparse Coding
Zeiler, Matthew D.;
Fergus, Rob
Abstract
|
PDF
Title: Learning Image Decompositions with Hierarchical Sparse Coding
Author(s): Zeiler, Matthew D.; Fergus, Rob
Abstract:
We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This scheme makes it possible to robustly learn multiple layers of representation and we show a model with 4 layers, trained on images from the Caltech-101 dataset. We use our model to produce image decompositions that, when used as input to standard classification schemes, give a significant performance gain over low-level edge features and yield an overall performance competitive with leading approaches.
-
Ph.D. Thesis
2009
Factor Graphs for Relational Regression
Chopra, Sumit
Abstract
|
PDF
Title: Factor Graphs for Relational Regression
Candidate: Chopra, Sumit
Advisor(s): LeCun, Yann
Abstract:
Inherent in many interesting regression problems is a rich underlying inter-sample "Relational Structure". In these problems, the samples may be related to each other in ways such that the unknown variables associated with any sample depend not only on its individual attributes, but also on the variables associated with related samples. One such problem, whose importance is further emphasized by the present economic crisis, is understanding real estate prices. The price of a house clearly depends on its individual attributes, such as the number of bedrooms. However, the price also depends on the neighborhood in which the house lies and on the time period in which it was sold. This effect of neighborhood and time on the price is not directly measurable. It is merely reflected in the prices of other houses in the vicinity that were sold around the same time. Uncovering these spatio-temporal dependencies can certainly help us better understand house prices, while at the same time improving prediction accuracy.
Problems of this nature fall in the domain of "Statistical Relational Learning". However, the drawback of most models proposed so far is that they cater only to classification problems. To this end, we propose "relational factor graph" models for doing regression in relational data. A single factor graph is used to capture, first, dependencies among the individual variables of a sample and, second, dependencies among variables associated with multiple samples. The proposed models are capable of capturing hidden inter-sample dependencies via latent variables and also permit non-linear log-likelihood functions in parameter space, thereby allowing considerably more complex architectures. Efficient inference and learning algorithms for relational factor graphs are proposed. The models are applied to predicting the prices of real estate properties and to constructing house price indices. The relational aspect of the model accounts for the hidden spatio-temporal influences on the price of every house. Experiments show that one can achieve considerably superior performance by identifying and using the underlying spatio-temporal structure associated with the problem. To the best of our knowledge, this is the first work on relational regression, and it is also the first to construct house price indices by simultaneously accounting for the spatio-temporal effects on house prices using a large-scale, industry-standard data set.
-
TR2008-919
2009
Hybrid Domain Decomposition Algorithms for Compressible and Almost Incompressible Elasticity
Dohrmann, Clark R.;
Widlund, Olof B.
Abstract
|
PDF
Title: Hybrid Domain Decomposition Algorithms for Compressible and Almost Incompressible Elasticity
Author(s): Dohrmann, Clark R.; Widlund, Olof B.
Abstract:
Overlapping Schwarz methods are considered for mixed finite element approximations of linear elasticity, with discontinuous pressure spaces, as well as for compressible elasticity approximated by standard conforming finite elements. The coarse components of the preconditioners are based on spaces with a number of degrees of freedom per subdomain which is uniformly bounded, and which are similar to those previously developed for scalar elliptic problems and domain decomposition methods of iterative substructuring type, i.e., methods based on non-overlapping decompositions of the domain. The local components of the new preconditioners are based on solvers on a set of overlapping subdomains.
-
Ph.D. Thesis
2009
Numerical Estimation of the Second Largest Eigenvalue of a Reversible Markov Transition Matrix
Gade, Kranthi
Abstract
|
PDF
Title: Numerical Estimation of the Second Largest Eigenvalue of a Reversible Markov Transition Matrix
Candidate: Gade, Kranthi
Advisor(s): Goodman, Jonathan
Abstract:
We discuss the problem of finding the second largest eigenvalue of an operator that defines a reversible Markov chain. The second largest eigenvalue governs the rate at which the statistics of the Markov chain converge to equilibrium. Scientific applications include understanding the very slow dynamics of some models of dynamic glass. Applications in computing include estimating the rate of convergence of Markov chain Monte Carlo algorithms.
Most practical Markov chains have state spaces so large that direct or even iterative methods from linear algebra are inapplicable. The size of the state space, which is the dimension of the eigenvalue problem, grows exponentially with the system size. This makes it impossible to store a vector (for sparse methods), let alone a matrix (for dense methods). Instead, we seek a method that uses only time correlation from samples produced from the Markov chain itself.
In this thesis, we propose a novel Krylov subspace type method to estimate the second eigenvalue from simulation data of the Markov chain, using test functions which are known to have good overlap with the slowest mode. This method starts with the naive Rayleigh quotient estimate of the test function and refines it to obtain an improved estimate of the second eigenvalue. We apply the method to a few model problems, and the estimate compares very favorably with the known answer. We also apply the estimator to some Markov chains occurring in practice, most notably in the study of glasses. We show experimentally that our estimator is more accurate and stable for these problems than existing methods.
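The naive Rayleigh quotient starting point can be illustrated on a toy chain where the answer is known; everything below (the two-state chain, the choice of test function) is illustrative and is not the thesis's Krylov refinement. A two-state chain with flip probabilities p and q has second eigenvalue exactly 1 - p - q, and the lag-1 autocorrelation of a centered test function estimates it from samples alone:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 0.2, 0.3  # two-state chain; exact second eigenvalue is 1 - p - q = 0.5

# Simulate the chain: from state 0 flip with prob p, from state 1 with prob q.
n = 200_000
x = np.empty(n, dtype=int)
x[0] = 0
u = rng.random(n)
for t in range(1, n):
    flip = u[t] < (p if x[t - 1] == 0 else q)
    x[t] = 1 - x[t - 1] if flip else x[t - 1]

# Naive Rayleigh quotient of the transition operator with respect to a
# centered test function (here, the indicator of state 1).
f = x - x.mean()
lam2_hat = np.dot(f[:-1], f[1:]) / np.dot(f[:-1], f[:-1])
print(lam2_hat)  # should be close to 0.5
```

Only time correlations from the trajectory are used, never the transition matrix itself, which is the point of the sampling-based approach when the state space is too large to store a vector.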
-
Ph.D. Thesis
2009
2D-Centric Interfaces and Algorithms for 3D Modeling
Gingold, Yotam
Abstract
|
PDF
Title: 2D-Centric Interfaces and Algorithms for 3D Modeling
Candidate: Gingold, Yotam
Advisor(s): Zorin, Denis
Abstract:
The creation of 3D models is a fundamental task in computer graphics. The task is required by professional artists working on movies, television, and games, and desired by casual users who wish to make their own models for use in virtual worlds or as a hobby.
In this thesis, we consider approaches to creating and editing 3D models that minimize the user's thinking in 3D. In particular, our approaches do not require the user to manipulate 3D positions in space or mentally invert complex 3D-to-2D mappings. We present interfaces and algorithms for the creation of 3D surfaces, for texturing, and for adding small-to-medium scale geometric detail.
First, we present a novel approach for texture placement and editing based on direct manipulation of textures on the surface. Compared to conventional tools for surface texturing, our system combines UV-coordinate specification and texture editing into one seamless process, reducing the need for careful initial design of parameterization and providing a natural interface for working with textures directly on 3D surfaces.
Second, we present a system for free-form surface modeling that allows a user to modify a shape by changing its rendered, shaded image using stroke-based drawing tools. A new shape, whose rendered image closely approximates user input, is computed using an efficient and stable surface optimization procedure. We demonstrate how several types of free-form surface edits which may be difficult to cast in terms of standard deformation approaches can be easily performed using our system.
Third, we present a single-view 2D interface for 3D modeling based on the idea of placing 2D primitives and annotations on an existing, pre-made sketch or image. Our interface frees users to create 2D sketches from arbitrary angles using their preferred tool---including pencil and paper---which they then "describe" using our tool to create a 3D model. Our primitives are manipulated with persistent, dynamic handles, and our annotations take the form of markings commonly used in geometry textbooks.
-
Ph.D. Thesis
2009
Proximity problems for point sets residing in spaces with low doubling dimension
Gottlieb, Lee-Ad
Abstract
|
PDF
Title: Proximity problems for point sets residing in spaces with low doubling dimension
Candidate: Gottlieb, Lee-Ad
Advisor(s): Cole, Richard
Abstract:
In this thesis we consider proximity problems on point sets. Proximity problems arise in all fields of computer science, with broad application to computational geometry, machine learning, computational biology, data mining and the like. In particular, we will consider the problems of approximate nearest neighbor search and dynamic maintenance of a spanner for a point set.
It has been conjectured that all algorithms for these two problems suffer from the "curse of dimensionality," meaning that their run times grow exponentially with the dimension of the point set. To avoid this undesirable growth, we consider point sets with doubling dimension lambda. We first present a dynamic data structure that uses linear space and supports a (1+e)-approximate nearest neighbor search of the point set. We then extend this algorithm to allow the dynamic maintenance of a low-degree (1+e)-spanner for the point set. The query and update times of these structures are exponential in lambda (as opposed to exponential in the dimension); when lambda is small, this provides a significant speed-up over known algorithms, and when lambda is constant these run times are optimal up to a constant. Even when no assumptions are made on lambda, the query and update times of the nearest neighbor search structure match the best known run times for approximate nearest neighbor search (up to a constant multiple in lambda). Further, the stretch of the spanner is optimal, and its update times improve on those of all previously known algorithms.
-
Ph.D. Thesis
2009
Creativity Support for Computational Literature
Howe, Daniel
Abstract
|
PDF
Title: Creativity Support for Computational Literature
Candidate: Howe, Daniel
Advisor(s): Perlin, Ken
Abstract:
The creativity support community has a long history of providing valuable tools to artists and designers. Similarly, creative digital media practice has proven a valuable pedagogical strategy for teaching core computational ideas. Neither strain of research has focused on the domain of literary art, however, instead targeting visual and aural media almost exclusively.
To address this situation, this thesis presents a software toolkit created specifically to support creativity in computational literature. Two primary hypotheses direct the bulk of the research presented: first, that it is possible to implement effective creativity support tools for literary art given current resource constraints; and second, that such tools, in addition to facilitating new forms of literary creativity, will provide unique opportunities for computer science education.
Designed both for practicing artists and for pedagogy, the research presented directly addresses impediments to participation in the field for a diverse range of users and provides an end-to-end solution for courses attempting to engage the creative faculties of computer science students, and to introduce a wider demographic--from writers to digital artists to media and literary theorists--to procedural literacy and computational thinking.
The tools and strategies presented have been implemented, deployed, and iteratively refined in real-world contexts over the past three years. In addition to their use in large-scale projects by contemporary artists, they have provided effective support for multiple iterations of 'Programming for Digital Art & Literature', a successful interdisciplinary computer science course taught by the author.
Taken together, this thesis provides a novel set of tools for a new domain, and demonstrates their real-world efficacy in providing both creativity and pedagogical support for a diverse and emerging population of users.
-
TR2009-920
2009
A numerical method for simulating the dynamics of 3D axisymmetric vesicles suspended in viscous flows
Veerapaneni, Shravan K.;
Gueyffier, Denis; Biros, George; Zorin, Denis
Abstract
|
PDF
Title: A numerical method for simulating the dynamics of 3D axisymmetric vesicles suspended in viscous flows
Author(s): Veerapaneni, Shravan K.; Gueyffier, Denis; Biros, George; Zorin, Denis
Abstract:
We extend "A boundary integral method for simulating the dynamics of inextensible vesicles suspended in a viscous fluid in 2D" (Veerapaneni et al., Journal of Computational Physics, 228(7), 2009) to the case of three-dimensional axisymmetric vesicles of spherical or toroidal topology immersed in viscous flows. Although the main components of the algorithm are similar in spirit to the 2D case (spectral approximation in space, semi-implicit time-stepping scheme), the main differences are that the bending and viscous forces require new analysis, the linearization for the semi-implicit schemes must be rederived, a fully implicit scheme must be used for the toroidal topology to eliminate a CFL-type restriction, and a novel numerical scheme for the evaluation of the 3D Stokes single-layer potential on an axisymmetric surface is necessary to speed up the calculations. By introducing these novel components, we obtain a time-stepping scheme that experimentally is unconditionally stable, has low cost per time step, and is third-order accurate in time. We present numerical results to analyze the cost and convergence rates of the scheme. To verify the solver, we compare it to a constrained variational approach that computes equilibrium shapes without interactions with a viscous fluid. To illustrate the applicability of the method, we consider a few vesicle-flow interaction problems: the sedimentation of a vesicle, and the interactions of one and three vesicles with a background Poiseuille flow.
-
TR2009-925
2009
A Hybrid Domain Decomposition Method and its Applications to Contact Problems
Lee, Jungho
Abstract
|
PDF
Title: A Hybrid Domain Decomposition Method and its Applications to Contact Problems
Author(s): Lee, Jungho
Abstract:
Our goal is to solve nonlinear contact problems. We consider bodies in contact with each other divided into subdomains, which in turn are unions of elements. The contact surface between the bodies is unknown a priori, and we have a nonpenetration condition between the bodies, which is essentially an inequality constraint. We choose to use an active set method to solve such problems, which has both outer iterations in which the active set is updated, and inner iterations in which a (linear) minimization problem is solved on the current active face. In the first part of this dissertation, we review the basics of domain decomposition methods. In the second part, we consider how to solve the inner minimization problems. Using an approach based purely on FETI algorithms with only Lagrange multipliers as unknowns, as has been developed by the engineering community, does not lead to a scalable algorithm with respect to the number of subdomains in each body. We prove that such an algorithm has a condition number estimate which depends linearly on the number of subdomains across a body; numerical experiments suggest that this is the best possible bound. We also consider a new method based on the saddle point formulation of the FETI methods with both displacement vectors and Lagrange multipliers as unknowns. The resulting system is solved with a block-diagonal preconditioner which combines the one-level FETI and the BDDC methods. This approach allows the use of inexact solvers. We show that this new method is scalable with respect to the number of subdomains, and that its convergence rate depends only logarithmically on the number of degrees of freedom of the subdomains and bodies. In the last part of this dissertation, a model contact problem is solved by two approaches. The first one is a nonlinear algorithm which combines an active set method and the new method of Chapter 4. We also present a novel way of finding an initial active set.
The second one uses the SMALBE algorithm, developed by Dostal et al. We show that the former approach has advantages over the latter.
-
Ph.D. Thesis
2009
Efficient Systems Biology Algorithms for Biological Networks over Multiple Time-Scales: From Evolutionary to Regulatory Time
Mitrofanova, Antonina
Abstract
|
PDF
Title: Efficient Systems Biology Algorithms for Biological Networks over Multiple Time-Scales: From Evolutionary to Regulatory Time
Candidate: Mitrofanova, Antonina
Advisor(s): Mishra, Bud
Abstract:
Recently, Computational Biology has emerged as one of the most exciting areas of computer science research, not only because of its immediate impact on many biomedical applications (e.g., personalized medicine, drug and vaccine discovery, tools for diagnostics and therapeutic interventions, etc.), but also because it raises many new and interesting combinatorial and algorithmic questions in the process. In this thesis, we focus on robust and efficient algorithms to analyze biological networks, primarily targeting protein networks, possibly the most fascinating networks in computational biology in terms of their structure, evolution and complexity, as well as because of their role in various genetic and metabolic diseases.
Classically, protein networks have been studied statically, i.e., without taking into account time-dependent metamorphic changes in network topology and functionality. In this work, we introduce new analysis techniques that view protein networks as being dynamic in nature, evolving over time, and diverse in regulatory patterns at various stages of the system development. Our analysis is capable of dealing with multiple time-scales: ranging from the slowest time-scale corresponding to evolutionary time between species, speeding up to inter-species pathway evolution time, and finally, moving to the other extreme at the cellular developmental time-scale.
We also provide a new method to overcome limitations imposed by the corrupting effects of experimental noise (e.g., high false positive and false negative rates) in Yeast Two-Hybrid (Y2H) networks, which often provide primary data for protein complexes. Our new combinatorial algorithm measures connectivity between proteins in a Y2H network not by edges but by edge-disjoint paths, which better reflects pathway evolution within a single-species network. This algorithm has been shown to be robust against increasing false positive and false negative rates, as estimated using variation of information and separation measures.
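The edge-disjoint-path connectivity measure can be made concrete: by Menger's theorem, the number of edge-disjoint paths between two proteins equals the maximum flow between them when every interaction edge is given unit capacity. A minimal sketch using generic Edmonds-Karp max-flow (an illustration of the measure, not the thesis's algorithm):

```python
from collections import deque

def edge_disjoint_paths(edges, s, t):
    """Count edge-disjoint paths between s and t in an undirected graph.
    By Menger's theorem this equals the max flow when every edge has
    unit capacity, computed here with Edmonds-Karp (BFS augmentation)."""
    cap = {}
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            cap.setdefault(a, {})
            cap[a][b] = cap[a].get(b, 0) + 1
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:  # push one unit along the path
            cap[parent[v]][v] -= 1
            cap[v][parent[v]] += 1
            v = parent[v]
        flow += 1
```

Unlike a raw edge count, this measure drops only when every alternative route between two proteins is severed, which is what makes it robust to individual false edges.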
In addition, we have devised a new way to incorporate evolutionary information in order to significantly improve classification of proteins, especially those isolated in their own networks or surrounded by poorly characterized neighbors. In our method, the networks of two (or more) species are joined by edges of high sequence similarity so that protein-homologs of different species can exchange information and acquire new and improved functional associations.
Finally, we have integrated many of these techniques into one tool to create a novel analysis of the malaria parasite P. falciparum's life-cycle at the reaction-time scale and single-cell level, encompassing its entire intra-erythrocytic developmental cycle (IDC). Our approach allows connecting time-course gene expression profiles of consecutive IDC stages in order to assign functions to unannotated malaria proteins and predict potential targets for vaccine and drug development.
-
Ph.D. Thesis
2009
Detecting, modeling and rendering complex configurations of curvilinear features
Parilov, Evgueni
Abstract
|
PDF
Title: Detecting, modeling and rendering complex configurations of curvilinear features
Candidate: Parilov, Evgueni
Advisor(s): Zorin, Denis
Abstract:
Curvilinear features allow one to represent a variety of real world regular patterns like honeycomb tiling as well as very complicated random patterns like networks of furrows on the surface of the human skin. We have developed a set of methods and new data representations for solving key problems related to curvilinear features, which include robust detection of intricate networks of curvilinear features from digital images, GPU-based sharp rendering of fields with curvilinear features, and a parametric synthesis approach to generate systems of curvilinear features with desirable local configurations and global control.
The existing edge-detection techniques may underperform in the presence of noise, usually do not link the detected edge points into chains, often fail on complex structures, depend heavily on an initial guess, and assume a significant manual phase. We have developed a technique based on active contours, or snakes, which avoids manual initial positioning of the snakes and can detect large networks of curves with complex junctions without user guidance.
The standard bilinear interpolation of piecewise continuous fields results in unwanted smoothing along the curvilinear discontinuities. Spatially varying features can be best represented as a function of the distance to the discontinuity curves and its gradient. We have developed a real-time, GPU-based method for interpolating an unsigned distance function field and its gradient field that preserves discontinuity feature curves, represented by quadratic Bezier curves, with minimal restrictions on their topology.
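The distance-based representation can be illustrated with a minimal CPU-side sketch that approximates the unsigned distance from a query point to a quadratic Bezier curve by dense parameter sampling (the GPU method above is far more sophisticated; the exact point-to-curve distance requires solving a cubic):

```python
import math

def bezier2(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return (u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0],
            u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1])

def unsigned_distance(q, p0, p1, p2, samples=1000):
    """Approximate unsigned distance from point q to the curve by
    sampling the parameter densely and taking the minimum."""
    best = float("inf")
    for i in range(samples + 1):
        x, y = bezier2(p0, p1, p2, i / samples)
        best = min(best, math.hypot(q[0] - x, q[1] - y))
    return best
```

The gradient field that the interpolation scheme also needs could be approximated from the same routine by finite differences.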
Detail features are very important visual cues which make computer-generated imagery look less artificial. Instead of using sample-based synthesis techniques, which lack user control over features and usually produce gaps in features or break feature coherency, we have explored an alternative approach of generating features using random fibre processes. We have developed a Gibbs-type random process of linear fibres based on local fibre interactions. It allows generating non-stationary curvilinear networks with some degree of regularity, and provides an intuitive set of parameters which directly define local fibre configurations and the global pattern of fibres.
For random systems of linear fibres which approximately form two orthogonal dominant orientation fields, we have adapted a streamline placement algorithm which converts such systems into overlapping random sets of coherent smooth curves.
-
Ph.D. Thesis
2009
Unsupervised Learning of Feature Hierarchies
Ranzato, Marc'Aurelio
Abstract
|
PDF
Title: Unsupervised Learning of Feature Hierarchies
Candidate: Ranzato, Marc'Aurelio
Advisor(s): LeCun, Yann
Abstract:
The applicability of machine learning methods is often limited by the amount of available labeled data, and by the ability (or inability) of the designer to produce good internal representations and good similarity measures for the input data vectors.
The aim of this thesis is to alleviate these two limitations by proposing algorithms to learn good internal representations and invariant feature hierarchies from unlabeled data. These methods go beyond traditional supervised learning algorithms, relying on unsupervised and semi-supervised learning.
In particular, this work focuses on ''deep learning'' methods, a set of techniques and principles to train hierarchical models. Hierarchical models produce feature hierarchies that can capture complex non-linear dependencies among the observed data variables in a concise and efficient manner. After training, these models can be employed in real-time systems because they compute the representation by a very fast forward propagation of the input through a sequence of non-linear transformations.
When the paucity of labeled data does not allow the use of traditional supervised algorithms, each layer of the hierarchy can be trained in sequence starting at the bottom by using unsupervised or semi-supervised algorithms. Once each layer has been trained, the whole system can be fine-tuned in an end-to-end fashion. We propose several unsupervised algorithms that can be used as building blocks to train such feature hierarchies. We investigate algorithms that produce sparse overcomplete representations and features that are invariant to known and learned transformations. These algorithms are designed using the Energy-Based Model framework and gradient-based optimization techniques that scale well on large datasets. The principle underlying these algorithms is to learn representations that are at the same time sparse, able to reconstruct the observation, and directly predictable by some learned mapping that can be used for fast inference at test time.
Using the general principles at the foundation of these algorithms, we validate these models on a variety of tasks, from visual object recognition to text document classification and retrieval.
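The "sparse yet able to reconstruct" principle can be illustrated with a classic sparse coding inference step, here iterative shrinkage-thresholding (ISTA) in pure Python. This is a generic textbook method included for illustration, not one of the thesis's algorithms:

```python
def ista_sparse_code(x, W, lam=0.1, eta=0.1, iters=200):
    """Infer a sparse code z minimizing ||x - W z||^2 / 2 + lam * ||z||_1
    by iterative shrinkage-thresholding (ISTA). W is a list of dictionary
    atoms (columns); x is the observation to reconstruct."""
    m, n = len(x), len(W)
    z = [0.0] * n
    for _ in range(iters):
        # residual of the current reconstruction: r = W z - x
        r = [sum(W[j][i] * z[j] for j in range(n)) - x[i] for i in range(m)]
        # gradient step on the reconstruction term...
        g = [z[j] - eta * sum(W[j][i] * r[i] for i in range(m))
             for j in range(n)]
        # ...followed by soft-thresholding (the L1 proximal map)
        z = [max(gj - lam * eta, 0.0) + min(gj + lam * eta, 0.0) for gj in g]
    return z
```

The "directly predictable by a learned mapping" part of the principle corresponds to training a fast feed-forward encoder to approximate the codes that this iterative inference produces.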
-
TR2009-923
2009
Learning least squares estimators without assumed priors or supervision
Raphan, Martin;
Simoncelli, Eero P.
Abstract
|
PDF
Title: Learning least squares estimators without assumed priors or supervision
Author(s): Raphan, Martin; Simoncelli, Eero P.
Abstract:
The two standard methods of obtaining a least-squares optimal estimator are (1) Bayesian estimation, in which one assumes a prior distribution on the true values and combines this with a model of the measurement process to obtain an optimal estimator, and (2) supervised regression, in which one optimizes a parametric estimator over a training set containing pairs of corrupted measurements and their associated true values. But many real-world systems do not have access to either supervised training examples or a prior model. Here, we study the problem of obtaining an optimal estimator given a measurement process with known statistics, and a set of corrupted measurements of random values drawn from an unknown prior. We develop a general form of nonparametric empirical Bayesian estimator that is written as a direct function of the measurement density, with no explicit reference to the prior. We study the observation conditions under which such "prior-free" estimators may be obtained, and we derive specific forms for a variety of different corruption processes. Each of these prior-free estimators may also be used to express the mean squared estimation error as an expectation over the measurement density, thus generalizing Stein's unbiased risk estimator (SURE) which provides such an expression for the additive Gaussian noise case. Minimizing this expression over measurement samples provides an "unsupervised regression" method of learning an optimal estimator from noisy measurements in the absence of clean training data. We show that combining a prior-free estimator with its corresponding unsupervised regression form produces a generalization of the "score matching" procedure for parametric density estimation, and we develop an incremental form of learning for estimators that are written as a linear combination of nonlinear kernel functions. 
Finally, we show through numerical simulations that the convergence of these estimators can be comparable to their supervised or Bayesian counterparts.
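For the additive Gaussian case the prior-free estimator takes the closed form x_hat(y) = y + sigma^2 * d/dy log p(y), written only in terms of the measurement density p. A sketch checking this identity when the prior is itself Gaussian, so the Bayes answer is known in closed form (the specific variances are illustrative):

```python
# For x ~ N(0, tau^2) and y = x + n, n ~ N(0, sigma^2), the marginal of y
# is N(0, tau^2 + sigma^2), so d/dy log p(y) = -y / (tau^2 + sigma^2).
# The prior-free formula should then reduce to the classical linear
# shrinkage estimator y * tau^2 / (tau^2 + sigma^2).

def prior_free_estimate(y, sigma2, marginal_score):
    """Least-squares estimate written only in terms of the measurement
    density's score function -- no explicit reference to the prior."""
    return y + sigma2 * marginal_score(y)

tau2, sigma2 = 4.0, 1.0
score = lambda y: -y / (tau2 + sigma2)   # score of the Gaussian marginal

for y in (-3.0, 0.0, 1.5, 10.0):
    bayes = y * tau2 / (tau2 + sigma2)   # classical posterior mean
    assert abs(prior_free_estimate(y, sigma2, score) - bayes) < 1e-12
```

In practice the marginal score would be estimated from the noisy samples themselves, which is what makes the estimator "prior-free"; the closed-form check above only verifies the identity.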
-
M.S. Thesis
2009
Plinkr: an Application of Semantic Search
Scott, John
Abstract
|
PDF
Title: Plinkr: an Application of Semantic Search
Candidate: Scott, John
Advisor(s): Shasha, Dennis
Abstract:
Plinkr extends and enriches traditional keyword search with semantic search technology. Specifically, Plinkr facilitates the process of discovering the intersection of information between two subjects. This intersection represents what the subjects have in common and thus effectively captures the relationships between them. This is accomplished by semantically tagging and scoring entities that are contained within various keyword searches. The most relevant entities are thus abstracted and presented as metadata which can be explored to highlight the most pertinent content.
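A hypothetical sketch of the intersection idea (the scoring scheme below is invented for illustration and is not Plinkr's actual algorithm): tag entity mentions in each subject's results, then rank the shared entities by a combined score:

```python
from collections import Counter

def intersection_entities(results_a, results_b):
    """Rank entities common to two subjects' search results.
    Each argument is a list of entity mentions (already tagged);
    the combined score here is simply the product of per-subject
    mention frequencies -- a hypothetical choice for illustration."""
    freq_a, freq_b = Counter(results_a), Counter(results_b)
    common = set(freq_a) & set(freq_b)
    return sorted(common,
                  key=lambda e: freq_a[e] * freq_b[e],
                  reverse=True)
```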
-
Ph.D. Thesis
2009
Search Problems for Speech and Audio Sequences
Weinstein, Eugene
Abstract
|
PDF
Title: Search Problems for Speech and Audio Sequences
Candidate: Weinstein, Eugene
Advisor(s): Mohri, Mehryar
Abstract:
The modern proliferation of very large audio, video, and biological databases has created a need for the design of effective methods for indexing and searching highly variable or uncertain data. Classical search and indexing algorithms deal with clean or perfect input sequences. However, an index created from speech transcriptions is marked with errors and uncertainties stemming from the use of imperfect statistical models in the speech recognition process. Similarly, automatic transcription of music, such as assigning a sequence of notes to represent a stream of music audio, is prone to errors. How can we generalize search and indexing algorithms to deal with such uncertain inputs?
This thesis presents several novel algorithms, analyses, and general techniques and tools for effective indexing and search that not only tolerate but actually exploit this uncertainty. In particular, it develops an algorithmic foundation for music identification, or content-based music search; presents novel automata-theoretic results applicable generally to a variety of search and indexing tasks; and describes new algorithms for topic segmentation, or automatic splitting of speech streams into topic-coherent segments.
We devise a new technique for music identification in which each song is represented by a distinct sequence of music sounds, called "music phonemes." In our approach, we learn the set of music phonemes, as well as a unique sequence of music phonemes characterizing each song, from training data using an unsupervised algorithm. We also propose a novel application of factor automata to create a compact mapping of music phoneme sequences to songs. Using these techniques, we construct an efficient and robust music identification system for a large database of songs.
We further design new algorithms for compact indexing of uncertain inputs based on suffix and factor automata and give novel theoretical guarantees for their space requirements. Suffix automata and factor automata represent the set of all suffixes or substrings of a set of strings, and are used in numerous indexing and search tasks, including the music identification system just mentioned. We show that the suffix automaton or factor automaton of a set of strings U has at most 2Q-2 states, where Q is the number of nodes of a prefix-tree representing the strings in U, a significant improvement over previous work. We also describe a matching new linear-time algorithm for constructing the suffix automaton S or factor automaton F of U in time O(|S|).
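The state bound can be checked empirically for a single string s, where the prefix tree of U = {s} has Q = |s| + 1 nodes and the 2Q-2 bound specializes to 2|s| states. Below is a standard online suffix automaton construction (Blumer et al.-style), included as an illustration rather than as the thesis's algorithm:

```python
def suffix_automaton(s):
    """Build the suffix automaton of a single string with the classic
    online construction. Each state is a dict with a suffix link, the
    length of its longest substring, and its outgoing transitions."""
    sa = [{"link": -1, "len": 0, "next": {}}]
    last = 0
    for ch in s:
        cur = len(sa)
        sa.append({"link": -1, "len": sa[last]["len"] + 1, "next": {}})
        p = last
        while p != -1 and ch not in sa[p]["next"]:
            sa[p]["next"][ch] = cur
            p = sa[p]["link"]
        if p == -1:
            sa[cur]["link"] = 0
        else:
            q = sa[p]["next"][ch]
            if sa[p]["len"] + 1 == sa[q]["len"]:
                sa[cur]["link"] = q
            else:
                # split q: clone keeps q's transitions at the shorter length
                clone = len(sa)
                sa.append({"link": sa[q]["link"],
                           "len": sa[p]["len"] + 1,
                           "next": dict(sa[q]["next"])})
                while p != -1 and sa[p]["next"].get(ch) == q:
                    sa[p]["next"][ch] = clone
                    p = sa[p]["link"]
                sa[q]["link"] = clone
                sa[cur]["link"] = clone
        last = cur
    return sa
```

Walking the transitions from the initial state answers substring queries in time linear in the query, which is what makes these automata useful as indexes.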
We also define a new quality measure for topic segmentation systems and design a discriminative topic segmentation algorithm for speech inputs, thus facilitating effective indexation of spoken audio collections. The new quality measure improves on previously used criteria and is correlated with human judgment of topic-coherence. Our segmentation algorithm uses a novel general topical similarity score based on word co-occurrence statistics. This new algorithm outperforms previous methods in experiments over speech and text streams. We further demonstrate that the performance of segmentation algorithms can be improved by using a lattice of competing hypotheses over the speech stream rather than just the one-best hypothesis as input.
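A toy baseline makes the word co-occurrence idea concrete: place a topic boundary wherever the bag-of-words cosine similarity between adjacent sentences drops below a threshold. This is far simpler than the discriminative, lattice-based algorithm described above:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two word-count vectors (Counters)."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def segment(sentences, threshold=0.1):
    """Return indices where a topic boundary is placed: between any
    adjacent sentences whose lexical similarity falls below threshold."""
    bows = [Counter(s.split()) for s in sentences]
    return [i + 1 for i in range(len(bows) - 1)
            if cosine(bows[i], bows[i + 1]) < threshold]
```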
-
TR2008-915
2009
Body Signature Recognition
Williams, George;
Bregler, Christoph; Hackney, Peggy; Rosenthal, Sally; McDowall, Ian; Smolskiy, Kirill
Abstract
|
PDF
Title: Body Signature Recognition
Author(s): Williams, George; Bregler, Christoph; Hackney, Peggy; Rosenthal, Sally; McDowall, Ian; Smolskiy, Kirill
Abstract:
This paper describes a new visual representation of motion that is used to learn and classify body language - what we call "body signatures" - of people while they are talking. We applied this technique to several hours of internet videos and television broadcasts that include US politicians and leaders from Germany, France, Iran, Russia, Pakistan, and India, public figures such as the Pope, and numerous talk show hosts and comedians. Depending on the complexity of the task, we show up to 80% recognition performance and clustering into broader body language categories.
-
Ph.D. Thesis
2009
Using Application-Domain Knowledge in the Runtime Support of Multi-Experiment Computational Studies
Yau, Siu-Man
Abstract
|
PDF
Title: Using Application-Domain Knowledge in the Runtime Support of Multi-Experiment Computational Studies
Candidate: Yau, Siu-Man
Advisor(s): Karamcheti, Vijay; Zorin, Denis
Abstract:
Multi-Experiment Studies (MESs) are a type of computational study in which the same simulation software is executed multiple times, and the results of all executions need to be aggregated to obtain useful insight. As computational simulation experiments become increasingly accepted as part of the scientific process, the use of MESs is becoming more widespread among scientists and engineers.
MESs present several challenging requirements for the computing system. First, many MESs need constant user monitoring and feedback, requiring simultaneous steering of multiple executions of the simulation code. Second, MESs can comprise many executions of long-running simulations; the sheer volume of computation can make them prohibitively long to run.
Parallel architectures offer an attractive computing platform for MESs. Low-cost, small-scale desktops employing multi-core chips allow widespread dedicated local access to parallel computation power, offering more research groups an opportunity to achieve interactive MESs. Massively parallel, high-performance computing clusters afford a level of parallelism never seen before, and present an opportunity to address the problem of computationally intensive MESs.
However, in order to fully leverage the benefits of parallel architectures, the traditional parallel systems view has to be augmented. Existing parallel computing systems often treat each execution of the software as a black box, and so cannot view an entire computational study as a single entity to be optimized.
This dissertation investigates how a parallel system can view MESs as an end-to-end system and leverage the application-specific properties of MESs to address its requirements. In particular, the system can 1) adapt its scheduling decisions to the overall goal of an MES to reduce the needed computation, 2) simultaneously aggregate results from, and disseminate user actions to, multiple executions of the software to enable simultaneous steering, 3) store reusable information across executions of the simulation software to reduce individual run-time, and 4) adapt its resource allocation policies to the MES's properties to improve resource utilization.
Using a test bed system called SimX and four example MESs across different disciplines, this dissertation shows that the application-aware MES-level approach can achieve multi-fold to multiple orders-of-magnitude improvements over the traditional simulation-level approach.
-
Ph.D. Thesis
2009
Ensuring Correctness of Compiled Code
Zaks, Ganna
Abstract
|
PDF
Title: Ensuring Correctness of Compiled Code
Candidate: Zaks, Ganna
Advisor(s): Pnueli, Amir
Abstract:
Traditionally, the verification effort is applied to the abstract algorithmic descriptions of the underlying software. However, even well understood protocols such as Peterson's protocol for mutual exclusion, whose algorithmic description takes only half a page, have published implementations that are erroneous. Furthermore, the semantics of the implementations can be altered by optimizing compilers, which are very large applications and, consequently, are bound to have bugs. Thus, it is highly desirable to ensure the correctness of the compiled code especially in safety critical and high-assurance software. This dissertation describes two alternative approaches that bring us closer to solving the problem.
First, we present CoVaC - a deductive framework for proving program equivalence and its application to automatic verification of transformations performed by optimizing compilers. To leverage the existing program analysis techniques, we reduce the equivalence checking problem to analysis of one system - a cross-product of the two input programs. We show how the approach can be effectively used for checking equivalence of single-threaded programs that are structurally similar. Unlike the existing frameworks, our approach accommodates absence of compiler annotations and handles most of the classical intraprocedural optimizations such as constant folding, reassociation, common subexpression elimination, code motion, dead code elimination, branch optimizations, and others. In addition, we have developed rules for translation validation of interprocedural optimizations, which can be applied when compiler annotations are available.
The second contribution is the pancam framework for verifying multi-threaded C programs. Pancam first compiles a multithreaded C program into an optimized bytecode format. The framework relies on Spin, an existing explicit-state model checker, to orchestrate the program's state space search, while the program transitions and states are computed by the pancam bytecode interpreter. A feature of our approach is that pancam not only checks the actual implementation, but can also check the code after compiler optimizations. Pancam addresses the state space explosion problem by allowing users to define data abstraction functions and to constrain the number of allowed context switches. We also describe a partial order reduction method that reduces context switches using dynamic knowledge computed on-the-fly, while being sound for both safety and liveness properties.
-
TR2007-908
2008
General Algorithms for Testing the Ambiguity of Finite Automata
Allauzen, Cyril;
Mohri, Mehryar; Rastogi, Ashish
Abstract
|
PDF
Title: General Algorithms for Testing the Ambiguity of Finite Automata
Author(s): Allauzen, Cyril; Mohri, Mehryar; Rastogi, Ashish
Abstract:
This paper presents efficient algorithms for testing the finite, polynomial, and exponential ambiguity of finite automata with \(\epsilon\)-transitions. It gives an algorithm for testing the exponential ambiguity of an automaton \(A\) in time \(O(|A|_E^2)\), and finite or polynomial ambiguity in time \(O(|A|_E^3)\). These complexities significantly improve over the previous best complexities given for the same problem. Furthermore, the algorithms presented are simple and are based on a general algorithm for the composition or intersection of automata. We also give an algorithm to determine the degree of polynomial ambiguity of a polynomially ambiguous finite automaton \(A\) in time \(O(|A|_E^3)\). Finally, we present an application of our algorithms to an approximate computation of the entropy of a probabilistic automaton.
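The product construction at the heart of these algorithms can be illustrated with the simplest member of the family: testing whether a trim, epsilon-free NFA is ambiguous at all. The automaton admits some string with two distinct accepting paths iff its self-product contains an accessible, co-accessible pair state (p, q) with p != q. A sketch of that basic test (the paper's finite/polynomial/exponential classification builds on the same construction but is more involved):

```python
from collections import deque

def is_ambiguous(init, finals, trans):
    """Ambiguity test for a trim, epsilon-free NFA via the self-product.
    trans maps (state, symbol) -> set of successor states."""
    start = (init, init)
    seen = {start}
    work = deque([start])
    back = {}                          # reverse edges of the product
    while work:                        # forward reachability
        p1, p2 = work.popleft()
        for (src, sym), dests in trans.items():
            if src != p1:
                continue
            for n1 in dests:
                for n2 in trans.get((p2, sym), ()):
                    back.setdefault((n1, n2), set()).add((p1, p2))
                    if (n1, n2) not in seen:
                        seen.add((n1, n2))
                        work.append((n1, n2))
    # backward reachability from pairs of final states
    co = {s for s in seen if s[0] in finals and s[1] in finals}
    work = deque(co)
    while work:
        s = work.popleft()
        for prev in back.get(s, ()):
            if prev not in co:
                co.add(prev)
                work.append(prev)
    return any(p != q for p, q in seen & co)
```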
-
TR2008-913
2008
Competitive Hybridization Model
Cherepinsky, Vera;
Hashmi, Ghazala; Seul, Michael; Mishra, Bud
Abstract
|
PDF
Title: Competitive Hybridization Model
Author(s): Cherepinsky, Vera; Hashmi, Ghazala; Seul, Michael; Mishra, Bud
Abstract:
Microarray technology, in its simplest form, allows one to gather abundance data for target DNA molecules, associated with genomes or gene-expressions, and relies on hybridizing the target to many short probe oligonucleotides arrayed on a surface. While for such multiplexed reactions conditions are optimized to make the most of each individual probe-target interaction, subsequent analysis of these experiments is based on the implicit assumption that a given experiment gives the same result regardless of whether it was conducted in isolation or in parallel with many others. It has been discussed in the literature that this assumption is frequently false, and its validity depends on the types of probes and their interactions with each other. We present a detailed physical model of hybridization as a means of understanding probe interactions in a multiplexed reaction. The model is formulated as a system of ordinary differential equations (ODEs) describing kinetic mass action, with conservation-of-mass equations completing the system.
We examine pair-wise probe interactions in detail and present a model of "competition" between the probes for the target, especially when the target is in short supply. These effects are shown to be predictable from the affinity constants for each of the four probe sequences involved, namely, the match and mismatch for both probes. These affinity constants are calculated from thermodynamic parameters such as the free energy of hybridization, which are in turn computed according to the nearest neighbor (NN) model for each probe and target sequence.
Simulations based on the competitive hybridization model explain the observed variability in the signal of a given probe when measured in parallel with different groupings of other probes or individually. The results of the simulations are used for experiment design and pooling strategies, based on which probes have been shown to have a strong effect on each other's signal in the in silico experiment. These results are aimed at better design of multiplexed reactions on arrays used in genotyping (e.g., HLA typing, SNP or CNV detection, etc.) and mutation analysis (e.g., cystic fibrosis, cancer, autism, etc.).
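A minimal version of the kinetic model can be written down directly: mass-action ODEs for two probes competing for one target, integrated here with forward Euler. All rate constants and concentrations below are illustrative placeholders, not fitted nearest-neighbor thermodynamic values:

```python
def competitive_hybridization(k1f, k1r, k2f, k2r, p1, p2, t,
                              dt=1e-3, steps=20000):
    """Forward-Euler integration of mass-action kinetics for two probes
    competing for one target:  P1 + T <-> C1,  P2 + T <-> C2.
    Mass conservation is implicit in the update rules."""
    c1 = c2 = 0.0
    for _ in range(steps):
        r1 = k1f * p1 * t - k1r * c1   # net binding rate of probe 1
        r2 = k2f * p2 * t - k2r * c2   # net binding rate of probe 2
        p1 -= r1 * dt
        p2 -= r2 * dt
        t -= (r1 + r2) * dt
        c1 += r1 * dt
        c2 += r2 * dt
    return c1, c2
```

With the target in short supply, the probe with the larger affinity constant K = kf/kr captures proportionally more of it, which is exactly the competition effect the model is designed to expose.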
-
M.S. Thesis
2008
Friendshare: A decentralized, consistent storage repository for collaborative file sharing
Chiang, Frank
Abstract
|
PDF
Title: Friendshare: A decentralized, consistent storage repository for collaborative file sharing
Candidate: Chiang, Frank
Advisor(s): Li, Jinyang
Abstract:
Data sharing has become more and more collaborative with the rise of Web 2.0, where multiple writers jointly write and organize the content in a repository. Current solutions use a centralized entity, such as Wikipedia or Google Groups, to serve the data. However, centralized solutions may be undesirable due to privacy concerns and censorship, which are problems that can be alleviated by switching to decentralized solutions.
The challenge of building a decentralized collaborative repository is achieving high data availability, durability, and consistency. Attaining these goals is difficult because peer nodes have limited bandwidth and storage space, low availability, and the repository has high membership churn.
This thesis presents Friendshare, a decentralized multiple-writer data repository. Separating the metadata from the data allows for efficient metadata replication across privileged admin nodes, thus increasing availability and durability. The primary commit scheme, where a primary node is responsible for determining the total order of writes in the repository, is employed to ensure eventual consistency. If the primary leaves the system unexpectedly, the remaining admin nodes run Paxos, a consensus protocol, to elect a new primary.
The Paxos protocol requires high node availability in order to run efficiently, a criterion that is rarely met in typical peer-to-peer networks. To rectify this problem, we offer two optimizations that improve Paxos performance in low-availability environments.
Friendshare has been implemented and deployed to gather real-world statistics. To offer theoretical predictions, we built a simulator to demonstrate the performance and service availability of Friendshare at various node online percentages. In addition, we show the performance improvements of our Paxos optimizations in comparison with the basic Paxos protocol.
-
TR2007-906
2008
Factor Graphs for Relational Regression
Chopra, Sumit;
Thampy, Trivikaraman; Leahy, John; Caplin, Andrew; LeCun, Yann
Abstract
|
PDF
Title: Factor Graphs for Relational Regression
Author(s): Chopra, Sumit; Thampy, Trivikaraman; Leahy, John; Caplin, Andrew; LeCun, Yann
Abstract:
Traditional methods for supervised learning involve treating the input data as a set of independent, identically distributed samples. However, in many situations, the samples are related in such a way that variables associated with one sample depend on other samples. We present a new form of relational graphical model that, in addition to capturing the dependence of the output on sample specific features, can also capture hidden relationships among samples through a non-parametric latent manifold. Learning in the proposed graphical model involves simultaneously learning the non-parametric latent manifold along with a non-relational parametric model. Efficient inference algorithms are introduced to accomplish this task. The method is applied to the prediction of house prices. A non-relational model predicts an ``intrinsic" price of the house which depends only on its individual characteristics, and a relational model estimates a hidden surface of ``desirability'' coefficients which links the price of a house to that of similar houses in the neighborhood.
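The intrinsic-plus-desirability decomposition can be illustrated with a deliberately simplified sketch (not the authors' factor-graph model): a least-squares fit supplies the intrinsic price, and a k-nearest-neighbor average of training residuals stands in for the latent desirability surface. All names and the toy data below are hypothetical.

```python
import numpy as np

def fit_intrinsic(X, y):
    """Least-squares fit of the non-relational (intrinsic) price model."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def desirability(coords, residuals, query, k=3):
    """Relational part: average residuals of the k nearest training houses."""
    dist = np.linalg.norm(coords - query, axis=1)
    return residuals[np.argsort(dist)[:k]].mean()

# toy data: price = 100 * area + a location-dependent premium
rng = np.random.default_rng(0)
area = rng.uniform(1.0, 3.0, size=(50, 1))
coords = rng.uniform(0.0, 1.0, size=(50, 2))
premium = np.where(coords[:, 0] > 0.5, 50.0, -50.0)  # two "neighborhoods"
price = 100.0 * area[:, 0] + premium

w = fit_intrinsic(area, price)
Xb = np.hstack([area, np.ones((50, 1))])
residuals = price - Xb @ w

intrinsic_err = np.abs(Xb[:5] @ w - price[:5]).mean()
relational = Xb[:5] @ w + np.array(
    [desirability(coords, residuals, q) for q in coords[:5]])
relational_err = np.abs(relational - price[:5]).mean()
print(relational_err < intrinsic_err)  # the relational term helps
```

The point of the sketch is the division of labor: the parametric model explains what individual features can, and the relational term soaks up the spatially correlated remainder.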
-
Ph.D. Thesis
2008
Verification of Transactional Memories and Recursive Programs
Cohen, Ariel
Abstract
|
PDF
Title: Verification of Transactional Memories and Recursive Programs
Candidate: Cohen, Ariel
Advisor(s): Pnueli, Amir
Abstract:
Transactional memory is a programming abstraction intended to simplify the synchronization of conflicting concurrent memory accesses without the difficulties associated with locks. In the first part of this thesis we provide a framework and tools that allow one to formally verify that a transactional memory implementation satisfies its specification. First, we show how to specify transactional memory in terms of admissible interchanges of transaction operations, and give proof rules for showing that an implementation satisfies its specification. We illustrate how to verify correctness, first using a model checker for bounded instantiations, and subsequently by using a theorem prover, thus eliminating all bounds. We provide a mechanical proof of the soundness of the verification method, as well as mechanical proofs for several implementations from the literature, including one that supports non-transactional memory accesses.
Procedural programs with unbounded recursion present a challenge to symbolic model checkers, since they ostensibly require the checker to model an unbounded call stack. In the second part of this thesis we present a method for model checking safety and liveness properties of procedural programs. Our method proceeds by first augmenting a concrete procedural program with a well-founded ranking function, and then abstracting the augmented program by a finitary state abstraction. Using procedure summarization, the procedural abstract program is then reduced to a finite-state system, which is model checked for the property.
-
TR2008-910
2008
Pointer Analysis, Conditional Soundness, and Proving the Absence of Errors
Conway, Christopher L.;
Dams, Dennis; Namjoshi, Kedar S.; Barrett, Clark
Abstract
|
PDF
Title: Pointer Analysis, Conditional Soundness, and Proving the Absence of Errors
Author(s): Conway, Christopher L.; Dams, Dennis; Namjoshi, Kedar S.; Barrett, Clark
Abstract:
It is well known that the use of points-to information can substantially improve the accuracy of a static program analysis. Commonly used algorithms for computing points-to information are known to be sound only for memory-safe programs. Thus, it appears problematic to utilize points-to information to verify the memory safety property without giving up soundness. We show that a sound combination is possible, even if the points-to information is computed separately and is only conditionally sound. This result is based on a refined statement of the soundness conditions of points-to analyses and a general mechanism for composing conditionally sound analyses.
-
M.S. Thesis
2008
STUMP: Stereo Correspondence in the Cyclopean Eye under Belief Propagation
Distler, George
Abstract
|
PDF
Title: STUMP: Stereo Correspondence in the Cyclopean Eye under Belief Propagation
Candidate: Distler, George
Advisor(s): Geiger, Davi
Abstract:
The human visual system sees at any moment a static scene in three dimensions. This 3D view of the world is acquired from two images, one from the left eye and the other from the right eye. Fusing the left and right stereo pair of images yields a single cyclopean view portraying depth. Stereo vision can be applied in computer vision via calibrated stereo cameras that capture the left and right images. Given a stereo pair of images, one can compute the field of depth via a stereo correspondence algorithm. We present a new approach to computing the disparity (depth) by means of the STUMP algorithm.
The STUMP algorithm presents a solution to the stereo correspondence problem. We propose to solve the problem of discontinuities in disparity within epipolar lines by modeling geometric constraints of smooth, tilted, and occluded surfaces, as well as unicity and opaqueness. Our algorithm runs within a framework built upon BP-TwoGraphs belief propagation estimation [17]. As a result, we provide a disparity map in the cyclopean coordinate system, determined by a probability distribution computed in polynomial time.
-
TR2008-912
2008
An Overlapping Schwarz Algorithm for Almost Incompressible Elasticity
Dohrmann, Clark R.;
Widlund, Olof B.
Abstract
|
PDF
Title: An Overlapping Schwarz Algorithm for Almost Incompressible Elasticity
Author(s): Dohrmann, Clark R.; Widlund, Olof B.
Abstract:
Overlapping Schwarz methods are extended to mixed finite element approximations of linear elasticity which use discontinuous pressure spaces. The coarse component of the preconditioner is based on a low-dimensional space previously developed for scalar elliptic problems and a domain decomposition method of iterative substructuring type, i.e., a method based on non-overlapping decompositions of the domain, while the local components of the preconditioner are based on solvers on a set of overlapping subdomains.
A bound is established for the condition number of the algorithm which grows in proportion to the square of the logarithm of the number of degrees of freedom in individual subdomains and the third power of the relative overlap between the overlapping subdomains, and which is independent of the Poisson ratio as well as jumps in the Lam\'e parameters across the interface between the subdomains. A positive definite reformulation of the discrete problem makes the use of the standard preconditioned conjugate gradient method straightforward. Numerical results, which include a comparison with problems of compressible elasticity, illustrate the findings.
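In the standard notation of overlapping Schwarz theory (with $H$ the subdomain diameter, $h$ the element size, and $\delta$ the overlap width; this notation is assumed here rather than taken from the abstract), a bound of the stated form can be sketched as:

```latex
% Sketch of the stated condition-number bound; symbols are the usual
% domain-decomposition notation, assumed rather than quoted from the paper:
%   H = subdomain diameter, h = element size, \delta = overlap width.
\kappa\!\left(M^{-1}A\right)
  \;\le\;
  C \left(1 + \frac{H}{\delta}\right)^{3}
    \left(1 + \log\frac{H}{h}\right)^{2}
```

Here $C$ is independent of the Poisson ratio and of jumps in the Lam\'e parameters across the interface; the $(1+\log(H/h))^2$ factor corresponds to the "square of the logarithm of the number of degrees of freedom in individual subdomains," since that count scales with a power of $H/h$.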
-
Ph.D. Thesis
2008
Learning Long-Range Vision for an Offroad Robot
Hadsell, Raia
Abstract
|
PDF
Title: Learning Long-Range Vision for an Offroad Robot
Candidate: Hadsell, Raia
Advisor(s): LeCun, Yann
Abstract:
Teaching a robot to perceive and navigate in an unstructured natural world is a difficult task. Without learning, navigation systems are short-range and extremely limited. With learning, the robot can be taught to classify terrain at longer distances, but these classifiers can be fragile as well, leading to extremely conservative planning. A robust, high-level learning-based perception system for a mobile robot needs to continually learn and adapt as it explores new environments. To do this, a strong feature representation is necessary that can encode meaningful, discriminative patterns as well as invariance to irrelevant transformations. A simple realtime classifier can then be trained on those features to predict the traversability of the current terrain.
One such method for learning a feature representation is discussed in detail in this work. Dimensionality reduction by learning an invariant mapping (DrLIM) is a weakly supervised method for learning a similarity measure over a domain. Given a set of training samples and their pairwise relationships, which can be arbitrarily defined, DrLIM can be used to learn a function that is invariant to complex transformations of the inputs such as shape distortion and rotation.
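The pairwise training criterion behind DrLIM is a contrastive loss: similar pairs are pulled together, and dissimilar pairs are pushed apart up to a margin. A minimal sketch (the margin value and the exact weighting are illustrative, not taken from the thesis):

```python
def contrastive_loss(d, similar, margin=1.0):
    """DrLIM-style pairwise loss on the distance d between two mapped samples.

    Similar pairs are penalized for being far apart; dissimilar pairs are
    penalized only for falling inside the margin.
    """
    if similar:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

print(contrastive_loss(0.8, similar=True))    # grows with distance
print(contrastive_loss(0.8, similar=False))   # small: just inside the margin
print(contrastive_loss(1.5, similar=False))   # 0.0: outside the margin
```

Because the pair labels are defined arbitrarily (e.g., by temporal or spatial proximity rather than class labels), minimizing this loss yields a mapping invariant to whatever transformations the "similar" relation spans.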
The main contribution of this work is a self-supervised learning process for long-range vision that is able to accurately classify complex terrain, permitting improved strategic planning. As a mobile robot moves through offroad environments, it learns traversability from a stereo obstacle detector. The learning architecture is composed of a static feature extractor, trained offline for a general yet discriminative feature representation, and an adaptive online classifier. This architecture reduces the effect of concept drift by allowing the online classifier to quickly adapt to very few training samples without overtraining. After experiments with several different learned feature extractors, we conclude that unsupervised or weakly supervised learning methods are necessary for training general feature representations for natural scenes.
The process was developed and tested on the LAGR mobile robot as part of a fully autonomous vision-based navigation system.
-
TR2007-907
2008
Modal Logic, Temporal Models and Neural Circuits: What Connects Them
Kleinberg, Samantha;
Antoniotti, Marco; Ramakrishnan, Naren; Mishra, Bud
Abstract
|
PDF
Title: Modal Logic, Temporal Models and Neural Circuits: What Connects Them
Author(s): Kleinberg, Samantha; Antoniotti, Marco; Ramakrishnan, Naren; Mishra, Bud
Abstract:
-
TR2008-914
2008
Extension of Two-level Schwarz Preconditioners to Symmetric Indefinite Problems
Leong, Alan
Abstract
|
PDF
Title: Extension of Two-level Schwarz Preconditioners to Symmetric Indefinite Problems
Author(s): Leong, Alan
Abstract:
Two-level overlapping Schwarz preconditioners are extended for use with a class of large, symmetric, indefinite systems of linear algebraic equations. The focus is on an enriched coarse space with additional basis functions built from free-space solutions of the underlying partial differential equation. GMRES is used to accelerate the convergence of the preconditioned systems. Both additive and hybrid Schwarz methods are considered, and extensive numerical experiments are reported.
-
TR2008-911
2008
Nonlinear extraction of 'Independent Components' of elliptically symmetric densities using radial Gaussianization
Lyu, Siwei;
Simoncelli, Eero P.
Abstract
|
PDF
Title: Nonlinear extraction of 'Independent Components' of elliptically symmetric densities using radial Gaussianization
Author(s): Lyu, Siwei; Simoncelli, Eero P.
Abstract:
We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent (also known as factorial). A widely studied family of solutions, generally known as independent components analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent non-Gaussian sources. Here, we examine a complementary case, in which the signal density is non-Gaussian but elliptically symmetric. In this case, no linear transform suffices to properly decompose the signal into independent components, and thus, the ICA methodology fails. We show that a simple nonlinear transformation, which we call radial Gaussianization (RG), provides an exact solution for this case. We then examine this methodology in the context of natural image statistics, demonstrating that joint statistics of spatially proximal coefficients in a multi-scale image representation are better described as elliptical than factorial. We quantify this by showing that reduction in dependency achieved by RG is far greater than that achieved by ICA, for local spatial neighborhoods. We also show that the RG transformation may be closely approximated by divisive normalization transformations that have been used to model the nonlinear response properties of visual neurons, and that have been shown to reduce dependencies between multi-scale image coefficients.
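The two steps of RG — whiten so the density becomes spherically symmetric, then nonlinearly remap each sample's radius so radii match those of a Gaussian — can be sketched as follows. This is a simplified illustration, using quantile matching against Monte-Carlo Gaussian radii as a stand-in for the exact inverse chi CDF; the toy data are hypothetical.

```python
import numpy as np

def radial_gaussianize(x, rng=np.random.default_rng(0)):
    """Map elliptically symmetric data to an (approximately) factorial Gaussian.

    1. whiten, so the density becomes spherically symmetric;
    2. remap each sample's radius so that radii match those of a standard
       Gaussian (here via quantile matching against sampled Gaussian radii).
    """
    x = x - x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(x.T))
    white = (x @ vecs) / np.sqrt(vals) @ vecs.T   # multiply by C^{-1/2}
    r = np.linalg.norm(white, axis=1)
    # empirical stand-in for the chi distribution of Gaussian radii
    target = np.sort(np.linalg.norm(
        rng.standard_normal((100_000, x.shape[1])), axis=1))
    ranks = np.argsort(np.argsort(r))
    r_new = target[(ranks * len(target)) // len(r)]
    return white * (r_new / r)[:, None]

# heavy-tailed elliptical toy data: Gaussian with a random radial scaling
rng = np.random.default_rng(1)
z = rng.standard_normal((5000, 2))
scale = rng.uniform(0.5, 3.0, size=(5000, 1))     # makes the density non-Gaussian
data = (z * scale) @ np.array([[2.0, 0.5], [0.5, 1.0]])

g = radial_gaussianize(data)
print(np.round(np.cov(g.T), 1))   # close to the identity
```

The key point the sketch illustrates is that the required transform is purely radial: directions are untouched, so no linear (ICA-style) map could achieve the same factorization.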
-
Ph.D. Thesis
2008
Synthesizing Executable Programs from Requirements
Plock, Cory
Abstract
|
PDF
Title: Synthesizing Executable Programs from Requirements
Candidate: Plock, Cory
Advisor(s): Goldberg, Benjamin
Abstract:
Automatic generation of correct software from requirements has long been a ``holy grail'' for system and software development. According to this vision, instead of implementing a system and then working hard to apply testing and verification methods to prove system correctness, a system is rather built correctly by construction. This problem, referred to as synthesis, is undecidable in the general case. However, by restricting the domain to decidable subsets, it is possible to bring this vision one step closer to reality.
The focus of our study is reactive systems, or non-terminating programs that continuously receive input from an external environment and produce output responses. Reactive systems are often safety critical and include applications such as anti-lock braking systems, auto-pilots, and pacemakers. One of the challenges of reactive system design is ensuring that the software meets the requirements under the assumption of unpredictable environment input. The behavior of many of these systems can be expressed as regular languages over infinite strings, a domain in which synthesis has yielded successful results.
We present a method for synthesizing executable reactive systems from formal requirements. The object-oriented requirements language of Live Sequence Charts (LSCs) is considered. We begin by establishing a mapping between various subsets of the language and finite-state formal models. We also consider LSCs which can express time constraints over a dense-time domain. From one of these models, we show how to formulate a winning strategy that is guaranteed to satisfy the requirements, provided one exists. The strategy is realized in the form of a controller which guides the system in choosing only non-violating behaviors. We describe an implementation of this work as an extension of an existing tool called the Play-Engine.
-
Ph.D. Thesis
2008
Theory and Algorithms for Modern Machine Learning Problems and an Analysis of Markets
Rastogi, Ashish
Abstract
|
PDF
Title: Theory and Algorithms for Modern Machine Learning Problems and an Analysis of Markets
Candidate: Rastogi, Ashish
Advisor(s): Cole, Richard; Mohri, Mehryar
Abstract:
The unprecedented growth of the Internet over the past decade and of data collection, more generally, has given rise to vast quantities of digital information, ranging from web documents, images, and genomic databases to a vast array of business customer information. Consequently, it is of growing importance to develop tools and models that enable us to better understand this data and to design data-driven algorithms that leverage this information. This thesis provides several fundamental theoretical and algorithmic results for tackling such problems, with applications to speech recognition, image processing, natural language processing, computational biology, and web-based algorithms.
Probabilistic automata provide an efficient and compact way to model sequence-oriented data such as speech or web documents. Measuring the similarity of such automata provides a way of comparing the objects they model, and is an essential first step in organizing this type of data. We present algorithmic and hardness results for computing various discrepancies (or dissimilarities) between probabilistic automata, including the relative entropy and the Lp distance; we also give an efficient algorithm to determine if two probabilistic automata are equivalent. In addition, we study the complexity of computing the norms of probabilistic automata.
Organizing and querying large amounts of digitized data such as images and videos is a challenging task because little or no label information is available. This motivates transduction, a setting in which the learning algorithm can leverage unlabeled data during training to improve performance. We present novel error bounds for a family of transductive regression algorithms and validate their usefulness through experiments.
Widespread success of search engines and information retrieval systems has led to the large-scale collection of rating information, which is being used to provide personalized rankings. We examine an alternate formulation of the ranking problem for search engines, motivated by the requirement that, in addition to accurately predicting pairwise ordering, ranking systems must also preserve the magnitude of the preferences or the difference between ratings. We present algorithms with sound theoretical properties, and verify their efficacy through experiments.
Finally, price discovery in a market setting can be viewed as an (ongoing) learning problem. Specifically, the problem is to find and maintain a set of prices that balance supply and demand, a core topic in economics. This appears to involve complex implicit and possibly large-scale information transfers. We show that finding equilibrium prices, even approximately, in discrete markets is NP-hard, and complement the hardness result with a matching polynomial-time approximation algorithm. We also give a new way of measuring the quality of an approximation to equilibrium prices that is based on a natural aggregation of the dissatisfaction of individual market participants.
-
M.S. Thesis
2008
Measuring biomolecules: an image processing and length estimation pipeline using atomic force microscopy to measure DNA and RNA with high precision
Sundstrom, Andrew
Abstract
|
PDF
Title: Measuring biomolecules: an image processing and length estimation pipeline using atomic force microscopy to measure DNA and RNA with high precision
Candidate: Sundstrom, Andrew
Advisor(s): Mishra, Bud
Abstract:
Background. An important problem in molecular biology is to determine the complete transcription profile of a single cell, a snapshot that shows which genes are being expressed and to what degree. Seen in series as a movie, these snapshots would give direct, specific observation of the cell's regulation behavior. Taking a snapshot amounts to correctly classifying the cell's ~300 000 mRNA molecules into ~30 000 species, and keeping accurate count of each species. The cell's transcription profile may be affected by low abundances (1-5 copies) of certain mRNAs; thus, a sufficiently sensitive technique must be employed. A natural choice is to use atomic force microscopy (AFM) to perform single-molecule analysis. Reed et al. ("Single molecule transcription profiling with AFM", Nanotechnology, 18:4, 2007) developed such an analysis that classifies each mRNA by first multiply cleaving its corresponding synthesized cDNA with a restriction enzyme, then constructing its classification label from ratios of the lengths of its resulting fragments. Thus, they showed that the transcription profiling problem reduces to making high-precision measurements of cDNA backbone lengths, correct to within 20-25 bp (6-7.5 nm).
Contribution. We developed an image processing and length estimation pipeline using AFM that can achieve these measurement tolerances. In particular, we developed a biased length estimator using James-Stein shrinkage on trained coefficients of a simple linear regression model, a formulation that subsumes the models we studied.
Methods. First, AFM images were processed to extract molecular objects, skeletonize them, select proper backbone objects from the skeletons, and then compute initial lengths of the backbones. Second, a linear regression model was trained on a subset of molecules of known length, namely on their computed image feature quantities. Third, the model's coefficients underwent James-Stein shrinkage to create a biased estimator. Fourth, the trained and tuned model was applied to the image feature quantities computed for each test molecule, giving its final, corrected backbone length.
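The shrinkage step can be sketched with the classic positive-part James-Stein estimator applied to the fitted coefficients. This shrinks toward zero; the thesis's exact formulation (its shrinkage target and variance estimate) may differ, and the numbers below are illustrative.

```python
import numpy as np

def james_stein_shrink(w, sigma2):
    """Positive-part James-Stein shrinkage of a coefficient vector toward zero.

    Requires dimension p >= 3; sigma2 is the estimated variance of the
    coefficient estimates.  All coefficients are scaled by a common factor,
    trading a little bias for lower total estimation error.
    """
    p = len(w)
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / float(w @ w))
    return factor * w

w = np.array([2.0, 1.0, 0.5, 0.25])
shrunk = james_stein_shrink(w, sigma2=0.5)
print(shrunk)  # uniformly scaled toward zero
```

The larger the noise variance relative to the coefficient norm, the stronger the shrinkage; the positive-part clamp prevents the sign flip of the raw James-Stein formula.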
Results. Training data: one monodisperse set of cDNA molecules of theoretical length 75 nm. Test data: two monodisperse sets of cDNA molecules of unknown length. Corrected distributions of molecular backbone lengths were within 6-7.5 nm from the theoretical lengths of the unknowns, once revealed.
Conclusions. The results suggest our pipeline can be employed in the framework specified by Reed et al. to render single-molecule transcription profiles. The results reveal a high degree of systematic error in AFM measurements that suggests image processing alone is insufficient to achieve a much higher measurement accuracy.
-
Ph.D. Thesis
2008
Geometric Modeling with High Order Derivatives
Tosun, Elif
Abstract
|
PDF
Title: Geometric Modeling with High Order Derivatives
Candidate: Tosun, Elif
Advisor(s): Zorin, Denis
Abstract:
Modeling of high quality surfaces is the core of geometric modeling. Such models are used in many computer-aided design and computer graphics applications. Irregular behavior of higher-order differential parameters of the surface (e.g. curvature variation) may lead to aesthetic or physical imperfections. In this work, we consider approaches to constructing surfaces with high degree of smoothness.
One direction is based on a manifold-based surface definition, which ensures well-defined high-order derivatives that can be explicitly computed at any point. We extend a previously proposed manifold-based construction to surfaces with piecewise-smooth boundary and explore trade-offs in some elements of the construction. We show that growth of derivative magnitudes with order is a general property of constructions with locally supported basis functions, derive a lower bound for this growth, and numerically study the flexibility of the resulting surfaces at arbitrary points.
An alternative direction to using high-order surfaces is to define an approximation to high-order quantities for meshes, with high-order surface implicit. These approximations do not necessarily converge point-wise, but can nevertheless be successfully used to solve surface optimization problems. Even though fourth-order problems are commonly solved to obtain high quality surfaces, in many cases, these formulations may lead to reflection-line and curvature discontinuities. We consider two approaches to further increasing control over surface properties.
The first approach is to consider data-dependent functionals leading to fourth-order problems but with explicit control over desired surface properties. Our fourth-order functionals are based on reflection line behavior. Reflection lines are commonly used for surface interrogation and high-quality reflection line patterns are well-correlated with high-quality surface appearance. We demonstrate how these can be discretized and optimized accurately and efficiently on general meshes.
A more direct approach is to consider a poly-harmonic function on a mesh, such as the fourth-order biharmonic or the sixth-order triharmonic. The biharmonic and the triharmonic equations can be thought of as a linearization of curvature and curvature variation Euler-Lagrange equations respectively. We present a novel discretization for both problems based on the mixed finite element framework and a regularization technique for solving the resulting, highly ill-conditioned systems of equations. We show that this method, compared to more ad-hoc discretizations, has higher degree of mesh independence and yields surfaces of better quality.
-
TR2007-903
2007
An Efficient Reduction of Ranking to Classification
Ailon, Nir;
Mohri, Mehryar
Abstract
|
PDF
Title: An Efficient Reduction of Ranking to Classification
Author(s): Ailon, Nir; Mohri, Mehryar
Abstract:
This paper describes an efficient reduction of the learning problem of ranking to binary classification. As with a recent result of Balcan et al. (2007), the reduction guarantees an average pairwise misranking regret of at most \(2r\) when using a binary classifier with regret \(r\). However, our reduction applies to a broader class of ranking loss functions, admits a simpler proof, and improves the expected running-time complexity, measured in the number of calls to a classifier or preference function, from \(\Omega(n^2)\) to \(O(n \log n)\). Furthermore, when only the top \(k\) ranked elements are required (\(k \ll n\)), as in many applications in information extraction or search engines, the time complexity can be further reduced to \(O(k \log k + n)\). Our reduction and algorithm are thus practical for realistic applications where the number of points to rank exceeds several thousands. Many of our results also extend beyond the bipartite case previously studied.
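The reduction ranks by running randomized QuickSort with the learned preference function as the comparator, so each comparison is one call to the binary classifier and the expected number of calls is \(O(n \log n)\). A minimal sketch over distinct items (`prefer` here is a hypothetical stand-in for the learned, possibly inconsistent preference function):

```python
import random

def qs_rank(items, prefer, rng=random.Random(0)):
    """Rank items via randomized QuickSort, where prefer(a, b) -> True
    means the (learned, possibly inconsistent) preference function puts
    a ahead of b.  Expected number of preference calls: O(n log n).
    """
    if len(items) <= 1:
        return list(items)
    pivot = rng.choice(items)
    rest = [x for x in items if x != pivot]
    left = [x for x in rest if prefer(x, pivot)]
    right = [x for x in rest if not prefer(x, pivot)]
    return qs_rank(left, prefer, rng) + [pivot] + qs_rank(right, prefer, rng)

# with a consistent preference the output is simply the sorted order
print(qs_rank([3, 1, 4, 5, 9, 2, 6], lambda a, b: a < b))
# → [1, 2, 3, 4, 5, 6, 9]
```

The interesting case is an inconsistent `prefer` (cyclic preferences): QuickSort still terminates, and the regret analysis bounds how much such inconsistencies can hurt the resulting ranking.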
-
TR2007-902
2007
N-Way Composition of Weighted Finite-State Transducers
Allauzen, Cyril;
Mohri, Mehryar
Abstract
|
PDF
Title: N-Way Composition of Weighted Finite-State Transducers
Author(s): Allauzen, Cyril; Mohri, Mehryar
Abstract:
Composition of weighted transducers is a fundamental algorithm used in many applications, including computing complex edit-distances between automata, computing string kernels in machine learning, and combining different components of a speech recognition, speech synthesis, or information extraction system. We present a generalization of the composition of weighted transducers, \emph{$n$-way composition}, which is dramatically faster in practice than the standard composition algorithm when combining more than two transducers. The worst-case complexity of our algorithm for composing three transducers $T_1$, $T_2$, and $T_3$ is $O(\min(|T_1|_E|T_2|_Q|T_3|_E, |T_1|_Q|T_2|_E|T_3|_Q) + |T|)$, where $T$ is the result of that composition and $|T_i| = |T_i|_Q + |T_i|_E$, with $|T_i|_Q$ the number of states and $|T_i|_E$ the number of transitions of $T_i$, $i = 1, 2, 3$. In many cases, this significantly improves on the complexity of standard composition. Furthermore, standard composition can be obtained as a special case of our algorithm. We report the results of several experiments demonstrating the improvement in practice. These theoretical and empirical improvements significantly enhance performance in the applications already mentioned.
-
Ph.D. Thesis
2007
Scaling Data Servers via Cooperative Caching
Annapureddy, Siddhartha
Abstract
|
PDF
Title: Scaling Data Servers via Cooperative Caching
Candidate: Annapureddy, Siddhartha
Advisor(s): Mazieres, David
Abstract:
In this thesis, we present design techniques -- and systems that illustrate and validate these techniques -- for building data-intensive applications over the Internet. We enable the use of a traditional bandwidth-limited server in these applications. A large number of cooperating users contribute resources such as disk space and network bandwidth, and form the backbone of such applications. The applications we consider fall into one of two categories. The first type provides user-perceived utility in proportion to the data download rates of the participants; bulk data distribution systems are a typical example. The second type is usable only when the participants have data download rates above a certain threshold; video streaming is a prime example.
We built Shark, a distributed file system, to address the first type of applications. It is designed for large-scale, wide-area deployment, while also providing a drop-in replacement for local-area file systems. Shark introduces a novel locality-aware cooperative-caching mechanism, in which clients exploit each other's file caches to reduce load on an origin file server. Shark also enables sharing of data even when it originates from different servers. In addition, Shark clients are mutually distrustful in order to operate in the wide-area. Performance results show that Shark greatly reduces server load and reduces client-perceived latency for read-heavy workloads both in the wide and local areas.
We built RedCarpet, a near-Video-on-Demand (nVoD) system, to address the second type of applications. nVoD allows a user to watch a video starting at any point after waiting for a small setup time. RedCarpet uses a mesh-based peer-to-peer (P2P) system to provide the nVoD service. In this context, we study the problem of scheduling the dissemination of the chunks that constitute a video. We show that providing nVoD is feasible with a combination of techniques that include network coding, avoiding resource starvation for different chunks, and overlay topology management algorithms. Our evaluation, using a simulator as well as a prototype, shows that systems that do not optimize in all these dimensions can deliver significantly worse nVoD performance.
-
Ph.D. Thesis
2007
Shape Analysis by Abstraction, Augmentation, and Transformation
Balaban, Ittai
Abstract
|
PDF
Title: Shape Analysis by Abstraction, Augmentation, and Transformation
Candidate: Balaban, Ittai
Advisor(s): Pnueli, Amir; Zuck, Lenore
Abstract:
The goal of shape analysis is to analyze properties of programs that perform destructive updates of linked structures (heaps). This thesis presents an approach for shape analysis based on program augmentation (instrumentation), predicate abstraction, and model checking, that allows for verification of safety and liveness properties (which, for sequential programs, usually corresponds to program invariance and termination).
One of the difficulties in abstracting heap-manipulating programs is devising a decision procedure for a sufficiently expressive logic of graph properties. Since graph reachability (expressible by transitive closure) is not a first order property, the challenge is in showing that a decision procedure exists for a rich enough subset of first order logic with transitive closure.
Predicate abstraction is in general too weak to verify liveness properties. Thus an additional issue dealt with is how to perform abstraction while retaining enough information. The method presented here is domain-neutral, and applies to concurrent programs as well as sequential ones.
-
TR2007-887
2007
Magnitude-Preserving Ranking Algorithms
Cortes, Corinna;
Mohri, Mehryar; Rastogi, Ashish
Abstract
|
PDF
Title: Magnitude-Preserving Ranking Algorithms
Author(s): Cortes, Corinna; Mohri, Mehryar; Rastogi, Ashish
Abstract:
This paper studies the learning problem of ranking when one wishes not just to accurately predict pairwise ordering but also to preserve the magnitude of the preferences or the difference between ratings, a problem motivated by its crucial importance in the design of search engines, movie recommendation, and other similar ranking systems. We describe and analyze several algorithms for this problem and give stability bounds for their generalization error, extending previously known stability results to non-bipartite ranking and magnitude-preserving algorithms. We also report the results of experiments comparing these algorithms on several datasets and contrast these results with those obtained using an AUC-maximization algorithm.
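A minimal sketch of the magnitude-preserving idea (an illustration only, not one of the paper's actual algorithms): instead of a pairwise misranking loss, penalize the discrepancy between predicted and true rating gaps, so that a predictor is rewarded for preserving magnitudes and not merely order.

```python
def mp_rank_loss(scores, ratings):
    """Toy magnitude-preserving pairwise loss: mean squared difference
    between predicted and true rating gaps over all pairs."""
    n = len(scores)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(((scores[i] - scores[j]) - (ratings[i] - ratings[j])) ** 2
               for i, j in pairs) / len(pairs)

ratings = [1.0, 2.0, 4.0]
# A shifted predictor preserves every gap exactly; one that only
# preserves the ordering does not, and is penalized.
exact_gaps = mp_rank_loss([r + 3.0 for r in ratings], ratings)
order_only = mp_rank_loss([0.1, 0.2, 0.3], ratings)
```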
-
TR2007-886
2007
On the Computation of the Relative Entropy of Probabilistic Automata
Cortes, Corinna;
Mohri, Mehryar; Rastogi, Ashish; Riley, Michael
Abstract
|
PDF
Title: On the Computation of the Relative Entropy of Probabilistic Automata
Author(s): Cortes, Corinna; Mohri, Mehryar; Rastogi, Ashish; Riley, Michael
Abstract:
We present an exhaustive analysis of the problem of computing the relative entropy of two probabilistic automata. We show that the problem of computing the relative entropy of unambiguous probabilistic automata can be formulated as a shortest-distance problem over an appropriate semiring, give efficient exact and approximate algorithms for its computation in that case, and report the results of experiments demonstrating the practicality of our algorithms for very large weighted automata. We also prove that the computation of the relative entropy of arbitrary probabilistic automata is PSPACE-complete.
The relative entropy is used in a variety of machine learning algorithms and applications to measure the discrepancy between two distributions. We examine the use of the symmetrized relative entropy in machine learning algorithms and show that, contrary to what is suggested by a number of publications, the symmetrized relative entropy is neither positive definite symmetric nor negative definite symmetric, which limits its use and application in kernel methods. In particular, the convergence of training for learning algorithms is not guaranteed when the symmetrized relative entropy is used directly as a kernel, or as the operand of an exponential as in the case of Gaussian kernels.
Finally, we show that our algorithm for the computation of the entropy of an unambiguous probabilistic automaton can be generalized to the computation of the norm of an unambiguous probabilistic automaton by using a monoid morphism. In particular, this yields efficient algorithms for the computation of the Lp -norm of a probabilistic automaton.
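For distributions given explicitly as finite tables, the quantities involved are elementary (a toy illustration; the report's contribution is computing them when the distributions are represented implicitly by probabilistic automata):

```python
import math

def kl(p, q):
    """Relative entropy D(p || q) of two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sym_kl(p, q):
    """Symmetrized relative entropy D(p || q) + D(q || p): symmetric and
    nonnegative, but, as the report shows, still unsuitable as a kernel."""
    return kl(p, q) + kl(q, p)

p, q = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]
```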
-
TR2007-888
2007
Domain Decomposition for Less Regular Subdomains: Overlapping Schwarz in Two Dimensions
Dohrmann, Clark R.;
Klawonn, Axel; Widlund, Olof B.
Abstract
|
PDF
Title: Domain Decomposition for Less Regular Subdomains: Overlapping Schwarz in Two Dimensions
Author(s): Dohrmann, Clark R.; Klawonn, Axel; Widlund, Olof B.
Abstract:
In the theory of domain decomposition methods, it is often assumed that each subdomain is the union of a small set of coarse triangles or tetrahedra. In this study, extensions to the existing theory which accommodate subdomains of much less regular shape are presented; the subdomains are only required to be John domains. Attention is focused on overlapping Schwarz preconditioners for problems in two dimensions with a coarse space component of the preconditioner which allows for good results even for coefficients which vary considerably. It is shown that the condition number of the domain decomposition method is bounded by C(1 + H/δ)(1 + log(H/h))², where the constant C is independent of the number of subdomains and possible jumps in coefficients between subdomains. Numerical examples are provided which confirm the theory and demonstrate very good performance of the method for a variety of subregions including those obtained when a mesh partitioner is used for the domain decomposition.
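To get a feel for the bound, one can tabulate it for sample values of the subdomain diameter H, the overlap δ, and the mesh size h (C is an unknown constant; the numbers below are purely illustrative):

```python
import math

def schwarz_bound(H, delta, h, C=1.0):
    """Condition number bound C(1 + H/delta)(1 + log(H/h))^2 for the
    two-level overlapping Schwarz preconditioner."""
    return C * (1 + H / delta) * (1 + math.log(H / h)) ** 2

# Refining the mesh tenfold (h -> h/10) grows the bound only
# polylogarithmically, while the bound stays fixed if the overlap
# delta is kept a constant fraction of H.
coarse = schwarz_bound(H=0.25, delta=0.25 / 4, h=1e-3)
fine = schwarz_bound(H=0.25, delta=0.25 / 4, h=1e-4)
```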
-
Ph.D. Thesis
2007
Democratizing Content Distribution
Freedman, Michael
Abstract
|
PDF
Title: Democratizing Content Distribution
Candidate: Freedman, Michael
Advisor(s): Mazieres, David
Abstract:
In order to reach their large audiences, today's Internet publishers primarily use content distribution networks (CDNs) to deliver content. Yet the architectures of the prevalent commercial systems are tightly bound to centralized control, static deployments, and trusted infrastructure, inherently limiting their scope and scale to ensure cost recovery.
To move beyond such shortcomings, this thesis contributes a number of techniques that realize cooperative content distribution. By federating large numbers of unreliable or untrusted hosts, we can satisfy the demand for content by leveraging all available resources. We propose novel algorithms and architectures for three central mechanisms of CDNs: content discovery (where are nearby copies of the client's desired resource?), server selection (which node should a client use?), and secure content transmission (how should a client download content efficiently and securely from its multiple potential sources?).
These mechanisms have been implemented, deployed, and tested in production systems that have provided open content distribution services for more than three years. Every day, these systems answer tens of millions of client requests, serving terabytes of data to more than a million people.
This thesis presents five systems related to content distribution. First, Coral provides a distributed key-value index that enables content lookups to occur efficiently and returns references to nearby cached objects whenever possible, while still preventing any load imbalances from forming. Second, CoralCDN demonstrates how to construct a self-organizing CDN for web content out of unreliable nodes, providing robust behavior in the face of failures. Third, OASIS provides a general-purpose, flexible anycast infrastructure, with which clients can locate nearby or unloaded instances of participating distributed systems. Fourth, as a more clean-slate design that can leverage untrusted participants, Shark offers a distributed file system that supports secure block-based file discovery and distribution. Finally, our authentication code protocol enables the integrity verification of large files on-the-fly when using erasure codes for efficient data dissemination.
Taken together, this thesis provides a novel set of tools for building highly-scalable, efficient, and secure content distribution systems. By enabling the automated replication of data based on its popularity, we can make desired content available and accessible to everybody and, in effect, democratize content distribution.
-
TR2007-905
2007
Declarative Syntax Tree Engineering* Or, One Grammar to Rule Them All
Grimm, Robert
Abstract
|
PDF
Title: Declarative Syntax Tree Engineering* Or, One Grammar to Rule Them All
Author(s): Grimm, Robert
Abstract:
Grammars for many parser generators not only specify a language's syntax but also the corresponding syntax tree. Unfortunately, most parser generators pick a somewhat arbitrary combination of features from the design space for syntax trees and thus lock in specific trade-offs between expressivity, safety, and performance. This paper discusses the three major axes of the design space---specification within or outside a grammar, concrete or abstract syntax trees, and dynamically or statically typed trees---and their impact. It then presents algorithms for automatically realizing all major choices from the same, unmodified grammar with inline syntax tree declarations. In particular, this paper shows how to automatically (1) extract a separate syntax tree specification, (2) embed an abstract syntax tree within a concrete one, and (3) infer a strongly typed view on a dynamically typed tree. All techniques are implemented in the Rats! parser generator and have been applied to real-world C and Java grammars and their syntax trees.
-
TR2007-904
2007
Typical: Taking the Tedium Out of Typing
Grimm, Robert;
Harris, Laune; Le, Anh
Abstract
|
PDF
Title: Typical: Taking the Tedium Out of Typing
Author(s): Grimm, Robert; Harris, Laune; Le, Anh
Abstract:
The implementation of real-world type checkers requires a non-trivial engineering effort. The resulting code easily comprises thousands of lines, which increases the probability of software defects in a component critical to compiler correctness. To make type checkers easier to implement and extend, this paper presents Typical, a domain-specific language and compiler that directly and concisely captures the structure of type systems. Our language builds on the functional core of ML to represent syntax trees and types as variants and to traverse them with pattern matches. It then adds declarative constructs for common type checker concerns, such as scoping rules, namespaces, and constraints on types. It also integrates error checking and reporting with other constructs to promote comprehensive error management. We have validated our system with two real-world type checkers written in Typical, one for Typical itself and the other for C.
-
Ph.D. Thesis
2007
Joint Inference for Information Extraction and Translation
Ji, Heng
Abstract
|
PDF
Title: Joint Inference for Information Extraction and Translation
Candidate: Ji, Heng
Advisor(s): Grishman, Ralph
Abstract:
The traditional natural language processing pipeline incorporates multiple stages of linguistic analysis. Although errors are typically compounded through the pipeline, it is possible to reduce the errors in one stage by harnessing the results of the other stages.
This thesis presents a new framework based on component interactions to approach this goal. The new framework applies all stages in a suitable order, with each stage generating multiple hypotheses and propagating them through the whole pipeline. Feedback from subsequent stages is then used to enhance the target stage by re-ranking these hypotheses and producing the best analysis.
The effectiveness of this framework has been demonstrated by substantially improving the performance of Chinese and English entity extraction and Chinese-to-English entity translation. The inference knowledge includes mono-lingual interactions among information extraction stages such as name tagging, coreference resolution, relation extraction and event extraction, as well as cross-lingual interaction between information extraction and machine translation.
Such symbiosis of analysis components allows us to incorporate information from a much wider context, spanning the entire document and even going across documents, and utilize deeper semantic analysis; it will therefore be essential for the creation of a high- performance NLP pipeline.
-
TR2007-889
2007
An analysis of a FETI--DP algorithm on irregular subdomains in the plane
Klawonn, Axel;
Rheinbach, Oliver; Widlund, Olof B.
Abstract
|
PDF
Title: An analysis of a FETI--DP algorithm on irregular subdomains in the plane
Author(s): Klawonn, Axel; Rheinbach, Oliver; Widlund, Olof B.
Abstract:
In the theory for domain decomposition algorithms of the iterative substructuring family, each subdomain is typically assumed to be the union of a few coarse triangles or tetrahedra. This is an unrealistic assumption, in particular if the subdomains result from the use of a mesh partitioner, in which case they might not even have uniformly Lipschitz continuous boundaries.
The purpose of this study is to derive bounds for the condition number of these preconditioned conjugate gradient methods which depend only on a parameter in an isoperimetric inequality and two geometric parameters characterizing John and uniform domains. A related purpose is to explore to what extent well known technical tools previously developed for quite regular subdomains can be extended to much more irregular subdomains.
Some of these results are valid for any John domains, while an extension theorem, which is needed in this study, requires that the subdomains are uniform. The results, so far, are only complete for problems in two dimensions. Details are worked out for a FETI--DP algorithm and numerical results support the findings. Some of the numerical experiments illustrate that care must be taken when selecting the scaling of the preconditioners in the case of irregular subdomains.
-
M.S. Thesis
2007
Degeneracy Proof Predicates for the Additively Weighted Voronoi Diagram
Millman, David
Abstract
|
PDF
Title: Degeneracy Proof Predicates for the Additively Weighted Voronoi Diagram
Candidate: Millman, David
Advisor(s): Yap, Chee
Abstract:
This thesis focuses on the Additively Weighted Voronoi diagram. It begins by presenting the history of the diagram and some of the early algorithms used for its generation [OBSC00, Aur91]. It then addresses the more recent incremental approach to calculating the diagram, as seen in the 2D Apollonius Graphs (Delaunay Graphs of Disks) package of CGAL [KY06]. Next, the algorithm of Boissonnat et al. [BD05] for calculating convex hulls is presented. We then apply the predicates presented by Boissonnat to the CGAL implementation of the AW-Voronoi diagram, and discuss the results. The main contribution of this thesis is a set of predicates for the AW-Voronoi diagram of lower algebraic degree that also handle degeneracies in such a way as to produce a canonical result.
-
M.S. Thesis
2007
Cellstorm: A bioinformatics software system to visualize subcellular networks
Neves, Ana
Abstract
|
PDF
Title: Cellstorm: A bioinformatics software system to visualize subcellular networks
Candidate: Neves, Ana
Advisor(s): Shasha, Dennis
Abstract:
Cellstorm is a software system that allows rapid visualization of genes and subcellular networks. Given a set of genes, expression levels, a structural hierarchy, and network data, Cellstorm displays the networks at any level of the hierarchy and provides user options such as zooming, network selection, and list filtering.
Although Cellstorm is mainly aimed at biological applications, it can be used in any field that needs to display networks. Cellstorm achieves this by avoiding application-specific assumptions.
-
Ph.D. Thesis
2007
Authentication Mechanisms for Open Distributed Systems
Nicolosi, Antonio
Abstract
|
PDF
Title: Authentication Mechanisms for Open Distributed Systems
Candidate: Nicolosi, Antonio
Advisor(s): Mazieres, David; Shoup, Victor
Abstract:
While authentication within organizations is a well-understood problem, traditional solutions are often inadequate at the scale of the Internet, where the lack of a central authority, the open nature of the systems, and issues such as privacy and anonymity create new challenges. For example, users typically establish dozens of web accounts with independently administered services under a single password, which increases the likelihood of exposure of their credentials; users wish to receive email from anyone who is not a spammer, but the openness of the email infrastructure makes it hard to authenticate legitimate senders; users may have a rightful expectation of privacy when viewing widely-accessed protected resources such as premium website content, yet they are commonly required to present identifying login credentials, which permits tracking of their access patterns.
This dissertation describes enhanced authentication mechanisms to tackle the challenges of each of the above settings. Specifically, the dissertation develops: 1) a remote authentication architecture that lets users recover easily in case of password compromise; 2) a social network-based email system in which users can authenticate themselves as trusted senders without disclosing all their social contacts; and 3) a group access-control scheme where requests can be monitored while affording a degree of anonymity to the group member performing the request.
The proposed constructions combine system designs and novel cryptographic techniques to address their respective security and privacy requirements both effectively and efficiently.
-
Ph.D. Thesis
2007
New Design Criteria for Hash Functions and Block Ciphers
Puniya, Prashant
Abstract
|
PDF
Title: New Design Criteria for Hash Functions and Block Ciphers
Candidate: Puniya, Prashant
Advisor(s): Dodis, Yevgeniy
Abstract:
Cryptographic primitives, such as hash functions and block ciphers, are integral components in several practical cryptographic schemes. In order to prove security of these schemes, a variety of security assumptions are made on the underlying hash function or block cipher, such as collision-resistance, pseudorandomness etc. In fact, such assumptions are often made without much regard for the actual constructions of these primitives. In this thesis, we address this problem and suggest new, and possibly better, design criteria for hash functions and block ciphers.
We start by analyzing the design criteria underlying hash functions. The usual design principle here involves a two-step procedure: First, come up with a heuristically-designed and ``hopefully strong'' fixed-length input construction (i.e. the compression function), then use a standard domain extension technique, usually the cascade construction, to get a construction that works for variable-length inputs. We investigate this design principle from two perspectives:
- To instantiate the Random Oracle. We suggest modifications to existing constructions that make the resulting construction secure as a random oracle, with appropriate assumptions on the underlying compression function.
- In general, we look for ``black-box'' fixes to existing hash functions to get secure constructions for each of the common security notions required of hash functions. We also give suggestions for appropriate modes for using existing hash functions along these lines.
We next move on to discuss the Feistel network, which is used in the design of several popular block ciphers such as DES, Triple-DES, Blowfish etc. Currently, the celebrated result of Luby-Rackoff (and further extensions) is regarded as the theoretical basis for using this construction in block cipher design, where it was shown that a four-round Feistel network is a (strong) pseudorandom permutation (PRP) if the round functions are independent pseudorandom functions (PRFs). We study the Feistel network from two different perspectives:
- Is there a weaker security notion for round functions, than pseudorandomness, that suffices to prove security of the Feistel network?
- Can the Feistel network satisfy a much stronger security notion, i.e. security as an ideal cipher, under appropriate assumptions on the round functions?
We give a positive answer to the first question and a partial positive answer to the second question. In the process, we undertake a combinatorial study of the Feistel network, that might be useful in other scenarios as well. We provide several practical applications of our results for the Feistel network.
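The structural point underlying all of these questions -- a Feistel network is invertible no matter what the round functions are -- can be seen in a few lines. This is a toy sketch with hypothetical 32-bit halves and a hash-based round function, not a real cipher or the thesis's constructions:

```python
import hashlib

def f(half, key):
    # Toy round function on 32-bit halves (an assumption of this sketch):
    # need not be invertible; any function works.
    digest = hashlib.sha256(f"{half}-{key}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def feistel_encrypt(left, right, round_keys):
    """Each round swaps the halves and XORs a keyed function of one
    half into the other: (L, R) -> (R, L ^ f(R, k))."""
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys):
    """Inversion just runs the rounds backwards, reusing f as-is."""
    for k in reversed(round_keys):
        left, right = right ^ f(left, k), left
    return left, right

keys = [7, 13, 21, 42]                    # four rounds, as in Luby-Rackoff
ct = feistel_encrypt(0x12345678, 0x9abcdef0, keys)
pt = feistel_decrypt(*ct, keys)
```

Because decryption never inverts f itself, the security of the whole permutation rests entirely on the properties assumed of the round functions, which is exactly what the questions above probe.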
-
TR2007-900
2007
Empirical Bayes least squares estimation without an explicit prior
Raphan, Martin;
Simoncelli, Eero P.
Abstract
|
PDF
Title: Empirical Bayes least squares estimation without an explicit prior
Author(s): Raphan, Martin; Simoncelli, Eero P.
Abstract:
Bayesian estimators are commonly constructed using an explicit prior model. In many applications, one does not have such a model, and it is difficult to learn since one does not have access to uncorrupted measurements of the variable being estimated. In many cases however, including the case of contamination with additive Gaussian noise, the Bayesian least squares estimator can be formulated directly in terms of the distribution of noisy measurements. We demonstrate the use of this formulation in removing noise from photographic images. We use a local approximation of the noisy measurement distribution by exponentials over adaptively chosen intervals, and derive an estimator from this approximate distribution. We demonstrate through simulations that this adaptive Bayesian estimator performs as well or better than previously published estimators based on simple prior models.
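For additive Gaussian noise, the key identity (due to Miyasawa) expresses the Bayes least squares estimate using only the noisy-measurement density: x̂(y) = y + σ² d/dy log p(y). The sketch below is a toy illustration under assumed settings: a hidden ±1 prior is used only to simulate data and to write down p(y) in closed form (in practice p(y) would be estimated from the noisy data, as the report does); the estimator itself never touches the prior.

```python
import math
import random

random.seed(0)
sigma = 0.5

def noisy_density(y):
    """Density of y = x + N(0, sigma^2); here a two-Gaussian mixture."""
    g = lambda m: math.exp(-(y - m) ** 2 / (2 * sigma ** 2))
    return 0.5 * (g(-1.0) + g(1.0)) / math.sqrt(2 * math.pi * sigma ** 2)

def estimate(y, eps=1e-4):
    """Bayes least squares estimate via Miyasawa's identity,
    with the score d/dy log p(y) taken by finite differences."""
    score = (math.log(noisy_density(y + eps))
             - math.log(noisy_density(y - eps))) / (2 * eps)
    return y + sigma ** 2 * score

xs = [random.choice([-1.0, 1.0]) for _ in range(5000)]
ys = [x + random.gauss(0.0, sigma) for x in xs]
mse_noisy = sum((y - x) ** 2 for x, y in zip(xs, ys)) / len(xs)
mse_bayes = sum((estimate(y) - x) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

The denoised estimates have markedly lower squared error than the raw measurements, despite the estimator operating purely on the noisy density.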
-
TR2007-901
2007
DNA Hash Pooling and its Applications
Shasha, Dennis;
Amos, Martyn
Abstract
|
PDF
Title: DNA Hash Pooling and its Applications
Author(s): Shasha, Dennis; Amos, Martyn
Abstract:
In this paper we describe a new technique for the characterisation of populations of DNA strands. Such tools are vital to the study of ecological systems, at both the micro (e.g., individual humans) and macro (e.g., lakes) scales. Existing methods make extensive use of DNA sequencing and cloning, which can prove costly and time consuming. The overall objective is to address questions such as: (i) (Genome detection) Is a known genome sequence present at least in part in an environmental sample? (ii) (Sequence query) Is a specific fragment sequence present in a sample? (iii) (Similarity Discovery) How similar in terms of sequence content are two unsequenced samples?
We propose a method involving multiple filtering criteria that result in ``pools" of DNA of high or very high purity. Because our method is similar in spirit to hashing in computer science, we call the method {\it DNA hash pooling}. To illustrate this method, we describe examples using pairs of restriction enzymes. The {\it in silico} empirical results we present reflect a sensitivity to experimental error. The method requires minimal DNA sequencing and, when sequencing is required, little or no cloning.
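A toy software analogue of the idea (the enzyme recognition sites are real sequences, but the pooling scheme here is a drastic simplification of the paper's wet-lab protocol): strands hash into pools keyed by which restriction sites they contain, just as keys hash into buckets.

```python
from collections import defaultdict

# Recognition sites of two common restriction enzymes.
ENZYMES = {"EcoRI": "GAATTC", "HindIII": "AAGCTT"}

def pool_strands(strands):
    """Hash each strand to a pool keyed by the set of recognition
    sites it contains (the 'hash value' of the strand)."""
    pools = defaultdict(list)
    for s in strands:
        key = tuple(sorted(name for name, site in ENZYMES.items() if site in s))
        pools[key].append(s)
    return dict(pools)

# Hypothetical strands for illustration.
sample = ["TTGAATTCAA", "CCAAGCTTGG", "GAATTCAAGCTT", "ACGTACGT"]
pools = pool_strands(sample)
```

With more enzyme pairs, the pools become progressively purer, which is what makes queries like genome detection or similarity discovery answerable with little or no sequencing.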
-
Ph.D. Thesis
2007
Being Lazy and Preemptive at Learning toward Information Extraction
Shinyama, Yusuke
Abstract
|
PDF
Title: Being Lazy and Preemptive at Learning toward Information Extraction
Candidate: Shinyama, Yusuke
Advisor(s): Sekine, Satoshi
Abstract:
This thesis proposes a novel approach for exploring Information Extraction scenarios. Information Extraction, or IE, is a task aiming at finding events and relations in natural language texts that meet a user's demand. However, it is often difficult to formulate, or even define such events that satisfy both a user's need and technical feasibility. Furthermore, most existing IE systems need to be tuned for a new scenario with proper training data in advance. So a system designer usually needs to understand what a user wants to know in order to maximize the system performance, while the user has to understand how the system will perform in order to maximize his/her satisfaction.
In this thesis, we focus on maximizing the variety of scenarios that the system can handle instead of trying to improve the accuracy of a particular scenario. In traditional IE systems, a relation is defined a priori by a user and is identified by a set of patterns that are manually crafted or acquired in advance. We propose a technique called Unrestricted Relation Discovery, which defers determining what is a relation and what is not until the very end of the processing so that a relation can be defined a posteriori. This laziness gives huge flexibility to the types of relations the system can handle. Furthermore, we use the notion of recurrent relations to measure how useful each relation is. This way, we can discover new IE scenarios without fully specifying definitions or patterns, which leads to Preemptive Information Extraction, where a system can provide a user a portfolio of extractable relations and let the user choose them.
We used one year of news articles obtained from the Web as a development set. We discovered dozens of scenarios that are similar to existing scenarios tried by many IE systems, as well as scenarios that are relatively novel. We evaluated the existing scenarios with the Automatic Content Extraction (ACE) event corpus and obtained reasonable performance. We believe this system will shed new light on IE research by opening up various experimental IE scenarios.
-
Ph.D. Thesis
2007
Constituent Parsing by Classification
Turian, Joseph
Abstract
|
PDF
Title: Constituent Parsing by Classification
Candidate: Turian, Joseph
Advisor(s): Melamed, I. Dan
Abstract:
We present an approach to constituent parsing driven by classifiers induced to minimize a single regularized objective. It is the first discriminatively-trained constituent parser to surpass the Collins (2003) parser without using a generative model. Our primary contribution is simplifying the human effort required for feature engineering. Our model can incorporate arbitrary features of the input and parse state. Feature selection and feature construction occur automatically, as part of learning. We define a set of fine-grained atomic features, and let the learner induce informative compound features. Our learning approach includes several novel approximations and optimizations which improve the efficiency of discriminative training. We introduce greedy completion, a new agenda-driven search strategy designed to find low-cost solutions given a limit on search effort. The inference evaluation function was learned accurately enough to guide the deterministic parsers to the optimal parse reasonably quickly without pruning, and thus without search errors. Experiments demonstrate the flexibility of our approach, which has also been applied to machine translation (Wellington et al., AMTA 2006; Turian et al., NIPS 2007).
-
Ph.D. Thesis
2007
Enhanced Security Models for Network Protocols
Walfish, Shabsi
Abstract
|
PDF
Title: Enhanced Security Models for Network Protocols
Candidate: Walfish, Shabsi
Advisor(s): Dodis, Yevgeniy
Abstract:
Modeling security for protocols running in the complex network environment of the Internet can be a daunting task. Ideally, a security model for the Internet should provide the following guarantee: a protocol that "securely" implements a particular task specification will retain all the same security properties as the specification itself, even when an arbitrary set of protocols runs concurrently on the same network. This guarantee must hold even when other protocols are maliciously designed to interact badly with the analyzed protocol, and even when the analyzed protocol is composed with other protocols. The popular Universal Composability (UC) security framework aims to provide this guarantee.
Unfortunately, such strong security guarantees come with a price: they are impossible to achieve without the use of some trusted setup. Typically, this trusted setup is global in nature, and takes the form of a Public Key Infrastructure (PKI) and/or a Common Reference String (CRS). However, the current approach to modeling security in the presence of such setups falls short of providing expected security guarantees. A quintessential example of this phenomenon is the deniability concern: there exist natural protocols that meet the strongest known security notions (including UC) while failing to provide the same deniability guarantees that their task specifications imply they should provide.
We introduce the Generalized Universal Composability (GUC) framework to extend the UC security notion and enable the re-establishment of its original intuitive security guarantees even for protocols that use global trusted setups. In particular, GUC enables us to guarantee that secure protocols will provide the same level of deniability as the task specification they implement. To demonstrate the usefulness of the GUC framework, we first apply it to the analysis and construction of deniable authentication protocols. Building upon such deniable authentication protocols, we then prove a general feasibility result showing how to construct protocols satisfying our security notion for a large class of two-party and multi-party tasks (assuming the availability of some reasonable trusted setup). Finally, we highlight the practical applicability of GUC by constructing efficient protocols that securely instantiate two common cryptographic tasks: commitments and zero-knowledge proofs.
-
Ph.D. Thesis
2007
Tree-Structured Models of Multitext: Theory, Design and Experiments
Wellington, Benjamin
Abstract
|
PDF
Title: Tree-Structured Models of Multitext: Theory, Design and Experiments
Candidate: Wellington, Benjamin
Advisor(s): Melamed, I. Dan
Abstract:
Statistical machine translation (SMT) systems use empirical models to simulate the act of human translation between language pairs. This dissertation surveys the ability of currently popular syntax-aware SMT systems to model real-world multitext, and shows different types of linguistic phenomena occurring in natural language translation that these popular systems cannot capture. It then proposes a new grammar formalism, Generalized Multitext Grammar (GMTG), and a generalization of Chomsky Normal Form, that allow us to build an efficient SMT system using previously developed parsing techniques. The dissertation addresses many software engineering issues that arise when doing syntax-based SMT using large corpora and lays out an object-oriented design for a translation toolkit. Using the toolkit, we show that a tree-transduction-based SMT system, which uses modern machine learning algorithms, outperforms a generative baseline.
-
Ph.D. Thesis
2007
Formal Verification Using Static and Dynamic Analyses
Zaks, Aleksandr
Abstract
|
PDF
Title: Formal Verification Using Static and Dynamic Analyses
Candidate: Zaks, Aleksandr
Advisor(s): Pnueli, Amir
Abstract:
One of the main challenges of formal verification is the ability to handle systems of realistic size, a difficulty that is especially exacerbated in the context of software verification. In this dissertation, we suggest two related approaches that, while both relying on formal methods techniques, can still be applied to larger practical systems. The scalability is achieved mainly by restricting the types of properties we consider and the guarantees that are given.
Our first approach is a novel run-time monitoring framework. Unlike previous work on this topic, we expect the properties to be specified using the Property Specification Language (PSL). PSL is a newly adopted IEEE P1850 standard and is an extension of Linear Temporal Logic (LTL). The new features include regular expressions and finite trace semantics, which make the new logic very attractive for run-time monitoring of both software and hardware designs. To support the new logic we have extended the existing algorithm for LTL tester construction to cover the PSL-specific operators. Another novelty of our approach is the ability to use partial information about the program being monitored, whereas existing tools use only information about the observed trace and the property under consideration. This allows going beyond the focus of traditional run-time monitoring tools -- error detection in the execution trace -- towards the focus of static analysis -- bug detection in programs.
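As a toy illustration of finite-trace run-time monitoring (far simpler than the PSL tester construction in the thesis), a monitor for the response property G(req -> F ack) only needs to track pending obligations and report whether any remain when the trace ends; the event names below are assumptions of this sketch.

```python
def monitor_response(trace):
    """Finite-trace monitor for G(req -> F ack): every 'req' in the
    trace must eventually be followed by an 'ack'."""
    pending = 0
    for event in trace:
        if event == "req":
            pending += 1          # a new obligation
        elif event == "ack":
            pending = 0           # one ack discharges all pending requests
    return pending == 0           # finite-trace verdict at end of trace
```

Finite trace semantics matter here: the property is judged violated exactly when an obligation is still open at the end of the observed execution.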
In our second approach, we employ static analysis to compute SAT-based function summaries to detect invalid pointer accesses. To compute function summaries, we propose new techniques for improving precision and performance in order to reduce false error rates. In particular, we use BDDs to represent a symbolic simulation of functions, where BDDs allow an efficient representation of path-sensitive information and high-level simplification. In addition, we use a lightweight range analysis technique for determining lower and upper bounds for program variables, which can further offload work from the SAT solver. Note that while in our current implementation the analysis happens at compile time, we can also use the function summaries as a basis for run-time monitoring.
-
TR2006-880
2006
A Unified Construction of the Glushkov, Follow, and Antimirov Automata
Allauzen, Cyril;
Mohri, Mehryar
Abstract
|
PDF
Title: A Unified Construction of the Glushkov, Follow, and Antimirov Automata
Author(s): Allauzen, Cyril; Mohri, Mehryar
Abstract:
Many techniques have been introduced in the last few decades to create ε-free automata representing regular expressions: Glushkov automata, the so-called follow automata, and Antimirov automata. This paper presents a simple and unified view of all these ε-free automata for both unweighted and weighted regular expressions. It describes simple and general algorithms with running-time complexities at least as good as those of the best previously known techniques, and provides concise proofs. The construction methods are all based on two standard automata algorithms: epsilon-removal and minimization. This contrasts with the multitude of complicated and special-purpose techniques and proofs put forward by others to construct these automata. Our analysis provides a better understanding of ε-free automata representing regular expressions: they are all the result of applying some combination of epsilon-removal and minimization to the classical Thompson automata. This makes it straightforward to generalize these algorithms to the weighted case, which also yields much simpler algorithms than existing ones. For weighted regular expressions over a closed semiring, we extend the notion of follow automata to the weighted case. We also present the first algorithm to compute the Antimirov automata in the weighted case.
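For concreteness, the classical unweighted Glushkov (position) construction that the paper unifies with the others can be sketched as follows (a minimal illustration of the textbook construction with our own AST encoding; it is not code from the paper):

```python
# Minimal sketch of the classical unweighted Glushkov (position) construction.
# Regular expressions are tuples: ('sym', char, pos), ('cat', l, r),
# ('alt', l, r), ('star', e); positions must be distinct integers.

def nullable(e):
    t = e[0]
    if t == 'sym':  return False
    if t == 'star': return True
    if t == 'alt':  return nullable(e[1]) or nullable(e[2])
    return nullable(e[1]) and nullable(e[2])            # 'cat'

def first(e):
    t = e[0]
    if t == 'sym':  return {e[2]}
    if t == 'star': return first(e[1])
    if t == 'alt':  return first(e[1]) | first(e[2])
    f = first(e[1])                                     # 'cat'
    return f | first(e[2]) if nullable(e[1]) else f

def last(e):
    t = e[0]
    if t == 'sym':  return {e[2]}
    if t == 'star': return last(e[1])
    if t == 'alt':  return last(e[1]) | last(e[2])
    l = last(e[2])                                      # 'cat'
    return l | last(e[1]) if nullable(e[2]) else l

def follow(e, fol):
    t = e[0]
    if t == 'cat':
        follow(e[1], fol); follow(e[2], fol)
        for p in last(e[1]):
            fol.setdefault(p, set()).update(first(e[2]))
    elif t == 'alt':
        follow(e[1], fol); follow(e[2], fol)
    elif t == 'star':
        follow(e[1], fol)
        for p in last(e[1]):
            fol.setdefault(p, set()).update(first(e[1]))

def symbols(e, out):
    if e[0] == 'sym':    out[e[2]] = e[1]
    elif e[0] == 'star': symbols(e[1], out)
    else:                symbols(e[1], out); symbols(e[2], out)

def glushkov_match(e, word):
    """Simulate the Glushkov automaton of e on word."""
    fol, syms = {}, {}
    follow(e, fol); symbols(e, syms)
    if not word:
        return nullable(e)
    states = first(e)                  # positions reachable after one symbol
    for i, c in enumerate(word):
        cur = {p for p in states if syms[p] == c}
        if not cur:
            return False
        if i == len(word) - 1:
            return bool(cur & last(e))
        states = set().union(*(fol.get(p, set()) for p in cur))
    return False

# (a|b)* a  with positions 1, 2, 3
expr = ('cat', ('star', ('alt', ('sym', 'a', 1), ('sym', 'b', 2))),
               ('sym', 'a', 3))
```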
-
TR2006-884
2006
Invisible Safety of Distributed Protocols
Balaban, Ittai;
Pnueli, Amir; Zuck, Lenore
Abstract
|
PDF
Title: Invisible Safety of Distributed Protocols
Author(s): Balaban, Ittai; Pnueli, Amir; Zuck, Lenore
Abstract:
The method of ``Invisible Invariants'' has been applied successfully to protocols that assume a ``symmetric'' underlying topology, be it cliques, stars, or rings. In this paper we show how the method can be applied to proving safety properties of distributed protocols running under arbitrary topologies. Many safety properties of such protocols involve reachability predicates, which, at first glance, are beyond the scope of the Invisible Invariants method. To overcome this difficulty, we present a technique, called ``coloring,'' that allows us, in many instances, to replace the second-order reachability predicates by first-order predicates, resulting in properties that are amenable to Invisible Invariants, where ``reachable'' is replaced by ``colored.'' We demonstrate our techniques on several distributed protocols, including a variant of Luby's Maximal Independent Set protocol, the Leader Election protocol used in the IEEE 1394 (FireWire) distributed bus protocol, and various distributed spanning tree algorithms. All examples have been tested using the symbolic model checker TLV.
-
TR2006-885
2006
Shape Analysis of Single-Parent Heaps
Balaban, Ittai;
Pnueli, Amir; Zuck, Lenore
Abstract
|
PDF
Title: Shape Analysis of Single-Parent Heaps
Author(s): Balaban, Ittai; Pnueli, Amir; Zuck, Lenore
Abstract:
We define the class of single-parent heap systems, which rely on a singly-linked heap in order to model destructive updates on tree structures. This encoding has the advantage of relying on a relatively simple theory of linked lists in order to support abstraction computation. To facilitate the application of this encoding, we provide a program transformation that, given a program operating on a multi-linked heap without sharing, transforms it into one over a single-parent heap. It is then possible to apply shape analysis by predicate and ranking abstraction as in [BPZ05]. The technique has been successfully applied on examples with trees of fixed arity (balancing of and insertion into a binary sort tree).
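The flavor of the encoding can be conveyed by a toy transformation (our own illustration, not the program transformation defined in the paper): a tree whose nodes point to their children becomes a heap in which every node stores only its parent.

```python
# Toy illustration of the single-parent encoding idea (not the paper's
# transformation): a multi-linked tree, where each node points to its
# children, is re-expressed as a heap in which every node stores only
# its parent -- a singly-linked structure.

def to_single_parent(children):
    """children: dict mapping node -> list of child nodes (a tree).
    Returns dict mapping node -> parent (the root maps to None)."""
    parent = {n: None for n in children}
    for n, kids in children.items():
        for k in kids:
            parent[k] = n
    return parent

# hypothetical tree: root has children l, r; l has child ll
tree = {'root': ['l', 'r'], 'l': ['ll'], 'r': [], 'll': []}
parent = to_single_parent(tree)
```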
-
M.S. Thesis
2006
TimeIn: A temporal visualization for file access
Borden, Jeffrey
Abstract
|
PDF
Title: TimeIn: A temporal visualization for file access
Candidate: Borden, Jeffrey
Advisor(s): Shasha, Dennis
Abstract:
TimeIn unifies a given set of file objects, drawn from a local file system or flickr.com, into a single browsing experience, providing mechanisms to cluster visually similar objects and to display objects in a timeline view. To navigate this content, users are provided with a variety of mechanisms for filtering the set of objects presented.
For text based objects, TimeIn will analyze the content of the file and attempt to extract a set of descriptive phrases. For image based objects, TimeIn will annotate the object with the most frequently used colors of the image. Users have the option of augmenting these automatically generated tags by defining their own descriptive tags.
While providing novel features for browsing and searching content, TimeIn retains many of the original organizational features of existing systems. When content is imported from a hierarchical file system, users can still browse by the original hierarchical structure.
TimeIn also retains the PhotoSet structures associated with content imported from flickr.com. Users can also organize content into user-defined "albums" of objects. These albums can then be used to filter the set of objects on the timeline.
-
TR2006-883
2006
On Transductive Regression
Cortes, Corinna;
Mohri, Mehryar
Abstract
|
PDF
Title: On Transductive Regression
Author(s): Cortes, Corinna; Mohri, Mehryar
Abstract:
In many modern large-scale learning applications, the amount of unlabeled data far exceeds that of labeled data. A common instance of this problem is the 'transductive' setting where the unlabeled test points are known to the learning algorithm. This paper presents a study of regression problems in that setting. It presents 'explicit' VC-dimension error bounds for transductive regression that hold for all bounded loss functions and coincide with the tight classification bounds of Vapnik when applied to classification. It also presents a new transductive regression algorithm inspired by our bound that admits a primal and kernelized closed-form solution and deals efficiently with large amounts of unlabeled data. The algorithm exploits the position of unlabeled points to locally estimate their labels and then uses a global optimization to ensure robust predictions. Our study also includes the results of experiments with several publicly available regression data sets with up to 20,000 unlabeled examples. The comparison with other transductive regression algorithms shows that it performs well and that it can scale to large data sets.
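The two-stage structure -- local estimation of unlabeled labels followed by a global fit -- can be sketched in miniature (a 1-D toy of our own devising; the paper's actual algorithm has a kernelized closed-form solution):

```python
# Toy 1-D sketch of the two-stage idea (our simplification, not the paper's
# algorithm): unlabeled points receive local label estimates from their
# nearest labeled neighbors, and a global ridge fit is then run over both
# labeled and pseudo-labeled points.

def knn_estimate(x, labeled, k=2):
    """Local label estimate: average the labels of the k nearest labeled points."""
    nearest = sorted(labeled, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / len(nearest)

def ridge_fit(points, lam=1e-3):
    """Closed-form 1-D ridge through the origin: w = sum(x*y) / (sum(x^2) + lam)."""
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return sxy / (sxx + lam)

labeled = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.0)]      # roughly y = 2x
unlabeled = [1.5, 2.5]                               # test points, labels unknown
pseudo = [(x, knn_estimate(x, labeled)) for x in unlabeled]
w = ridge_fit(labeled + pseudo)                      # global fit uses all points
```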
-
Ph.D. Thesis
2006
Guaranteed Precision for Transcendental and Algebraic Computation Made Easy
Du, Zilin
Abstract
|
PDF
Title: Guaranteed Precision for Transcendental and Algebraic Computation Made Easy
Candidate: Du, Zilin
Advisor(s): Yap, Chee
Abstract:
Numerical non-robustness is a well-known phenomenon when implementing geometric algorithms. A general approach to achieve geometric robustness is Exact Geometric Computation (EGC). This dissertation explores the redesign and extension of Core Library, a C++ library which embraces the EGC approach. The contributions of this thesis are organized into three parts.
In the first part, we discuss the redesign of the Core Library, especially the expression "Expr" and bigfloat "BigFloat" classes. Our new design emphasizes extensibility in a clean and modular way. The three facilities in "Expr" -- filter, root bound, and bigfloat -- are separated into independent modules. This allows new filters, root bounds, and bigfloat substitutes to be plugged in. The key approximate evaluation and precision propagation algorithms have been greatly improved. A new bigfloat system based on MPFR and interval arithmetic has been incorporated. Our benchmarks show that the redesigned Core Library typically achieves a 5-10 times speedup. We also provide tools to facilitate extensions of "Expr" with new types of nodes, especially transcendental nodes.
Although the Core Library was originally designed for algebraic computation, transcendental functions are needed in many applications. In the second part, we present a complete algorithm for absolute approximation of the general hypergeometric functions. Its complexity is also given. The extension of this algorithm to ``blackbox numbers'' is provided. A general hypergeometric function package based on our algorithm has been implemented and integrated into the Core Library under our new design.
Brent has shown that many elementary functions, such as $\exp, \log, \sin$, etc., can be efficiently computed using the Arithmetic-Geometric Mean (AGM) based algorithm. However, he only gave an asymptotic error analysis. The constants in the Big $O(\cdot)$ notation required for implementation are unknown. We provide a non-asymptotic error analysis of the AGM algorithm and the related algorithms for logarithm and exponential functions. These algorithms have been implemented and incorporated into the Core Library.
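The AGM identity underlying Brent's method can be demonstrated at double precision (a floating-point toy without the certified error bounds that are the contribution here; the scaling parameter m is our choice):

```python
import math

# Floating-point sketch of the AGM identity behind Brent's method:
# for large s,  ln(s) ~= pi / (2 * AGM(1, 4/s)).  The thesis supplies
# non-asymptotic error bounds; this toy just scales the argument by 2^m
# so the asymptotic identity is already accurate at double precision.

def agm(a, b):
    """Arithmetic-geometric mean; converges quadratically."""
    while abs(a - b) > 1e-14 * max(a, b):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def agm_log(x, m=45):
    """Approximate ln(x) for x > 0 via the AGM identity (double precision)."""
    # ln(2) from the same identity applied to s = 2^m:  m*ln(2) ~ pi/(2*AGM(1, 4/2^m))
    ln2 = math.pi / (2 * m * agm(1.0, 4.0 / 2**m))
    s = x * 2**m                       # scale so s is large
    return math.pi / (2 * agm(1.0, 4.0 / s)) - m * ln2
```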
-
Ph.D. Thesis
2006
On Cryptographic Techniques for Digital Rights Management
Fazio, Nelly
Abstract
|
PDF
Title: On Cryptographic Techniques for Digital Rights Management
Candidate: Fazio, Nelly
Advisor(s): Dodis, Yevgeniy
Abstract:
With more and more content being produced, distributed, and ultimately rendered and consumed in digital form, devising effective Content Protection mechanisms and building satisfactory Digital Rights Management (DRM) systems have become top priorities for the Publishing and Entertainment Industries.
To help tackle this challenge, several cryptographic primitives and constructions have been proposed, including mechanisms to securely distribute data over a unidirectional insecure channel (Broadcast Encryption), schemes in which leakage of cryptographic keys can be traced back to the leaker (Traitor Tracing), and techniques to combine revocation and tracing capabilities (Trace-and-Revoke schemes).
In this thesis, we present several original constructions of the above primitives, which improve upon existing DRM-enabling cryptographic primitives along the following two directions:
- Widening their scope of applicability, e.g., by considering models that take into account usability issues typical of the DRM setting; and
- Strengthening their security guarantees to the higher levels that are standard, for example, for stand-alone encryption.
Our results along the first line of work include the following:
- An efficient public-key broadcast encryption scheme, which allows mutually mistrusting content providers to leverage a common delivery infrastructure, and can cope with low-end, stateless receivers;
- A traitor tracing scheme with optimal transmission rate, in which encryption does not cause a blow-up in the size of the content, thus allowing for optimal utilization of the broadcast channel;
- A public-key tracing and revoking scheme that can deal with both server-side and client-side scalability issues, while preserving traceability.
As for the second direction, our contribution can be divided as follows:
- A forward-secure public-key broadcast encryption scheme, in which the unauthorized access resulting from cracking a user-key is constrained to a minimal time frame which is delimited, in the future, by the revocation mechanism, and in the past, by forward secrecy;
- A precise formalization of the notion of adaptive chosen-ciphertext security for public-key broadcast encryption schemes, along with a modular and efficient construction.
Overall, the cryptographic tools developed in this thesis provide more flexibility and more security than existing solutions, and thus offer a better match for the challenges of the DRM setting.
-
Ph.D. Thesis
2006
Finding Your Match: Techniques for Improving Sequence Alignment in DNA and RNA
Gill, Ofer Hirsch
Abstract
|
PDF
Title: Finding Your Match: Techniques for Improving Sequence Alignment in DNA and RNA
Candidate: Gill, Ofer Hirsch
Advisor(s): Mishra, Bud
Abstract:
In Bioinformatics, finding correlations between species allows us to better understand the important biological functions of those species and to trace their evolution. This thesis considers sequence alignment, a method for obtaining these correlations. We improve upon sequence alignment tools designed for DNA with Plains, an algorithm that uses piecewise-linear gap functions and parameter optimization to obtain correlations in remotely related species pairs, such as human and fugu, using reasonable amounts of time and memory on an ordinary computer. We then discuss Planar, which is similar to Plains but is designed for aligning RNA and accounts for secondary structure. We also explore SEPA, a tool that uses p-value estimation based on exhaustive empirical data to better emphasize key results from an alignment with a measure of reliability. Using SEPA to measure the quality of an alignment, we proceed to compare Plains and Planar against similar alignment tools, emphasizing the interesting correlations caught in the process.
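As an illustration of gap-function alignment, the affine special case (a single linear piece, rather than the piecewise-linear gap functions used by Plains) can be computed with Gotoh's three-matrix dynamic program:

```python
# Gotoh's three-matrix DP for global alignment with affine gap costs:
# gap of length k costs open + (k-1)*ext.  This is the single-piece special
# case of the piecewise-linear gap functions discussed above, not Plains itself.

NEG = float('-inf')

def align(s, t, match=1, mis=-1, open_=-2, ext=-1):
    n, m = len(s), len(t)
    M = [[NEG] * (m + 1) for _ in range(n + 1)]  # s[i-1] aligned to t[j-1]
    X = [[NEG] * (m + 1) for _ in range(n + 1)]  # gap in t (s char vs '-')
    Y = [[NEG] * (m + 1) for _ in range(n + 1)]  # gap in s ('-' vs t char)
    M[0][0] = 0
    for i in range(1, n + 1):
        X[i][0] = open_ + (i - 1) * ext
    for j in range(1, m + 1):
        Y[0][j] = open_ + (j - 1) * ext
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mis
            M[i][j] = max(M[i-1][j-1], X[i-1][j-1], Y[i-1][j-1]) + sub
            X[i][j] = max(M[i-1][j] + open_, X[i-1][j] + ext)  # open vs extend
            Y[i][j] = max(M[i][j-1] + open_, Y[i][j-1] + ext)
    return max(M[n][m], X[n][m], Y[n][m])
```

Piecewise-linear gap functions generalize this by keeping one extension state per linear piece.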
-
M.S. Thesis
2006
Kronosphere: A temporal visualization for file access
Harrison, Chris
Abstract
|
PDF
Title: Kronosphere: A temporal visualization for file access
Candidate: Harrison, Chris
Advisor(s): Shasha, Dennis
Abstract:
Hierarchical file systems mirror the way people organize data in the real world. However, this method of organization is often inadequate in managing the immense number of files that populate hard drives. Kronosphere provides a novel time- and content-based navigational paradigm for managing and accessing media. This allows users to browse their documents by time, content, history, metadata, and relationships with other files.
-
Ph.D. Thesis
2006
DataSlicer: A Hosting Platform for Data-Centric Network Services
He, Congchun
Abstract
|
PDF
Title: DataSlicer: A Hosting Platform for Data-Centric Network Services
Candidate: He, Congchun
Advisor(s): Karamcheti, Vijay
Abstract:
As the Web evolves, the number of network services deployed on the Internet has been growing at a dramatic pace. Such services usually involve a massive volume of data stored in physical or virtual back-end databases, and they access the data to dynamically generate responses to client requests. These characteristics restrict the use of traditional mechanisms for improving service performance and scalability: large data volumes prevent replication of the service data at the multiple sites required by content distribution schemes, while dynamically generated responses do not support the reuse required by web caching schemes.
However, many deployed data-centric network services share other properties that can help alleviate this situation: (1) service usage patterns exhibit locality of various forms, and (2) services are accessed using standard protocols and publicly known message structures. When properly exploited, these characteristics enable the design of alternative caching infrastructures, which leverage distributed network intermediaries to inspect traffic flowing between clients and services, infer locality information dynamically, and potentially improve service performance by taking actions such as partial service replication, request redirection, or admission control.
This dissertation investigates the nature of locality in service usage patterns for two well-known web services, and reports on the design, implementation, and evaluation of such a network intermediary architecture, named DataSlicer. DataSlicer incorporates four main techniques: (1) Service-neutral request inspection and locality detection on distributed network intermediaries; (2) Construction of oriented overlays for clustering client requests; (3) Integrated load-balancing and service replication mechanisms that improve service performance and scalability by either redistributing the underlying traffic in the network or creating partial service replicas on demand at appropriate network locations; and (4) Robustness mechanisms to maintain system stability in a wide-area network environment.
DataSlicer has been successfully deployed on the PlanetLab network. Extensive experiments using synthetic workloads show that our approach can: (1) create appropriate oriented overlays to cluster client requests according to multiple application metrics; (2) detect locality information across multiple dimensions and granularity levels; (3) leverage the detected locality information to perform appropriate load-balancing and service replication actions with minimal cost; and (4) ensure robust behavior in the face of dynamically changing network conditions.
-
Ph.D. Thesis
2006
Multimarker Genetic Analysis Methods for High Throughput Array Data
Ionita, Iuliana
Abstract
|
PDF
Title: Multimarker Genetic Analysis Methods for High Throughput Array Data
Candidate: Ionita, Iuliana
Advisor(s): Mishra, Bud
Abstract:
In this thesis, we focus on multi-marker/multi-locus statistical methods for analyzing high-throughput array data used for the detection of genes implicated in complex disorders. There are two main parts: the first part concerns the localization of cancer genes from copy number variation data, with an application to lung cancer; the second part concerns the localization of disease genes using an affected-sib-pair design, with an application to inflammatory bowel disease. A third part addresses an important issue in the design of these disease-gene-detection studies. More details follow:
1. Detection of Oncogenes and Tumor Suppressor Genes using Multipoint Statistics from Copy Number Variation Data
ArrayCGH is a microarray-based comparative genomic hybridization technique that has been used to compare a tumor genome against a normal genome, thus providing rapid genomic assays of tumor genomes in terms of copy number variations of those chromosomal segments, which have been gained or lost. When properly interpreted, these assays are likely to shed important light on genes and mechanisms involved in initiation and progression of cancer. Specifically, chromosomal segments, amplified or deleted in a group of cancer patients, point to locations of cancer genes. We describe a statistical method to estimate the location of such genes by analyzing segmental amplifications and deletions in the genomes from cancer patients and the spatial relation of these segments to any specific genomic interval. The algorithm assigns to a genomic segment a score that parsimoniously captures the underlying biology. It computes a p-value for every putative disease gene by using results from the theory of scan statistics. We have validated our method using simulated datasets, as well as a real dataset on lung cancer.
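The segment-scoring idea can be conveyed by a toy score (our own simplification; the statistic in the thesis is more parsimonious and comes with scan-statistics p-values): count how many patients' aberrant segments cover a candidate interval.

```python
# Toy illustration of the interval-scoring idea (not the paper's statistic):
# each candidate genomic interval is scored by how many patients' amplified
# or deleted segments fully cover it; the thesis then attaches p-values to
# such scores using scan-statistics theory.

def interval_score(candidate, patient_segments):
    """Number of patient segments (start, end) that cover the candidate interval."""
    lo, hi = candidate
    return sum(1 for s, e in patient_segments if s <= lo and hi <= e)

# hypothetical deleted segments observed in four patients (start, end)
segments = [(10, 50), (20, 60), (5, 45), (70, 90)]
candidates = [(20, 40), (60, 80), (0, 10)]
best = max(candidates, key=lambda c: interval_score(c, segments))
```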
2. Multi-locus Linkage Analysis of Affected-Sib-Pairs
The affected-sib-pair (ASP) design is a simple and popular design in the linkage analysis of complex traits. Traditional ASP methods evaluate the linkage information at a locus by considering only the marginal linkage information present at that locus. However, complex traits are influenced by multiple genes that interact to increase the risk of disease. We describe a multi-locus linkage method that uses both the marginal information and information derived from the possible interactions among several disease loci, thereby increasing the significance of loci with modest marginal effects. Our method is based on a statistic that quantifies the linkage information contained in a set of markers. By a marker selection-reduction process, we screen a set of polymorphisms and select a few that appear linked to disease. We test our approach on simulated data and genome-scan data for inflammatory bowel disease. We show that our method is expected to be more powerful than single-locus methods in detecting disease loci responsible for complex traits.
3. A Practical Haplotype Inference Algorithm
We consider the problem of efficient inference algorithms to determine the haplotypes and their distribution from a dataset of unrelated genotypes.
With the currently available catalogue of single-nucleotide polymorphisms (SNPs) and given their abundance throughout the genome (one in about $500$ bps) and low mutation rates, scientists hope to significantly improve their ability to discover genetic variants associated with a particular complex trait. We present a solution to a key intermediate step by devising a practical algorithm that has the ability to infer the haplotype variants for a particular individual from its own genotype SNP data in relation to population data. The algorithm we present is simple to describe and implement; it makes no assumption such as perfect phylogeny or the availability of parental genomes (as in trio-studies); it exploits locality in linkages and low diversity in haplotype blocks to achieve a linear time complexity in the number of markers; it combines many of the advantageous properties and concepts of other existing statistical algorithms for this problem; and finally, it outperforms competing algorithms in computational complexity and accuracy, as demonstrated by the studies performed on real data and synthetic data.
-
TR2006-882
2006
A FETI-DP algorithm for elasticity problems with mortar discretization on geometrically non-conforming partitions
Kim, Hyea Hyun
Abstract
|
PDF
Title: A FETI-DP algorithm for elasticity problems with mortar discretization on geometrically non-conforming partitions
Author(s): Kim, Hyea Hyun
Abstract:
In this paper, a FETI-DP formulation for three-dimensional elasticity on non-matching grids over geometrically non-conforming subdomain partitions is considered. To resolve the nonconformity of the finite elements, a mortar matching condition is imposed on the subdomain interfaces (faces). A FETI-DP algorithm is then built by enforcing the mortar matching condition in dual and primal ways. In order to make the FETI-DP algorithm scalable, a set of primal constraints, which include average and momentum constraints over interfaces, are selected from the mortar matching condition. A condition number bound, $C(1+\log(H/h))^2$, is then proved for the FETI-DP formulation for elasticity problems with discontinuous material parameters. Only some faces need to be chosen as primal faces, on which the average and momentum constraints are imposed.
-
Ph.D. Thesis
2006
Expressive Motion
Lees, Alyssa
Abstract
|
PDF
Title: Expressive Motion
Candidate: Lees, Alyssa
Advisor(s): Bregler, Christopher; Geiger, Davi
Abstract:
Since the advent of motion capture animation, attempts have been made to extract the nebulously defined attributes of 'content' and 'style' from the motion data. By enabling quick access to highly precise data, motion capture offers abundant benefits for animation. Yet manipulating the expressive attributes of the motion data in a comprehensive manner has proved elusive. This dissertation proposes practical solutions based on insights from the dance community and on attributes learned from the motion data itself. The culminating project is a system that learns the deformations of the human body and reapplies them in exaggerated form for enhanced expressivity.
Developed alongside efficient and usable tools for animators, the result is a three-pronged technique to enhance the expressive qualities of motion capture animation. The key aspect is the creation of a deformable skeleton representation of the human body using a unique machine learning approach. The deformable skeleton is modeled by replicating the actual movements of the human spine. The second step relies on exploiting subtle aspects of motion, such as hand movement, to create an emotional effect visually. Both of these approaches involve exaggerating the movements in the same vein as the traditional 2-D animation technique of 'squash and stretch'. Finally, a novel technique for the application of style to a baseline motion capture sequence is developed.
All of these approaches are rooted in machine learning techniques. Linear discriminant analysis was initially applied to a single phrase of motion demonstrating various style characteristics in Laban notation. A variety of methods, including nonlinear PCA and LLE, were used to learn the underlying manifold of spine movements. Nonlinear dynamic models were learned in an attempt to describe motion segments rather than single phrases. In addition, the dissertation addresses a variety of obstacles in learning with motion data, including the correct parameterization of angles, the application of statistical analysis to quaternions, and appropriate distance measures between postures.
-
Ph.D. Thesis
2006
Building Trustworthy Storage Services out of Untrusted Infrastructure
Li, Jinyuan
Abstract
|
PDF
Title: Building Trustworthy Storage Services out of Untrusted Infrastructure
Candidate: Li, Jinyuan
Advisor(s): Mazieres, David
Abstract:
As the Internet has become increasingly ubiquitous, it has seen tremendous growth in the popularity of online services. These services range from online CVS repositories like sourceforge and shopping sites to online financial and administrative systems. It is critical for these services to provide correct and reliable execution for clients. However, given their attractiveness as targets and their ubiquitous accessibility, online servers also have a significant chance of being compromised, leading to Byzantine failures.
Designing and implementing a service to run on a machine that may be compromised is not an easy task, since infrastructure under malicious control may behave arbitrarily. Even worse, as any monitoring facility may also be subverted at the same time, there is no easy way for system behavior to be audited, or for malicious attacks to be detected.
We propose a solution to this problem by reducing the trust needed on the server side in the first place. In other words, our system is designed specifically to run on untrusted hosts. In this thesis, we realize this principle through two different approaches. First, we design and implement a new network file system -- SUNDR. In SUNDR, malicious servers cannot forge users' operations or tamper with their data without being detected. In the worst case, attackers can only conceal users' operations from each other. Still, SUNDR is able to detect this misbehavior whenever users communicate with each other directly.
The limitation of the approach above is that the system cannot guarantee ideal consistency in the presence of even a single failure. In the second approach, we use replicated state machines to tolerate some fraction of malicious server failures, which is termed Byzantine Fault Tolerance (BFT) in the literature. Classical BFT systems assume fewer than 1/3 of the replicas are malicious in order to provide ideal consistency. In this thesis, we push the boundary from 1/3 to 2/3. With fewer than 1/3 of replicas faulty, we provide the same guarantees as classical BFT systems. Additionally, we guarantee weaker consistency, instead of arbitrary behavior, when between 1/3 and 2/3 of replicas fail.
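The fault-tolerance thresholds above reduce to simple arithmetic (an illustration of the bounds only, not of the protocol itself):

```python
# Arithmetic illustration of the fault thresholds discussed above (not the
# protocol): with n replicas, classical BFT provides ideal consistency for
# f < n/3 faulty replicas; the weaker guarantee extends coverage to f < 2n/3.

def max_faults_ideal(n):
    """Largest f with 3f < n (classical BFT, ideal consistency)."""
    return (n - 1) // 3

def max_faults_weak(n):
    """Largest f with f < 2n/3 (weaker consistency instead of arbitrary behavior)."""
    return (2 * n - 1) // 3
```

For example, with n = 4 replicas the classical bound tolerates one fault, while the weaker guarantee still holds with two.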
-
Ph.D. Thesis
2006
Measures for Robust Stability and Controllability
Mengi, Emre
Abstract
|
PDF
Title: Measures for Robust Stability and Controllability
Candidate: Mengi, Emre
Advisor(s): Overton, Michael
Abstract:
A linear time-invariant dynamical system is robustly stable if the system, as well as all nearby systems in a neighborhood of interest, is stable. An important property of robustly stable systems is that they decay asymptotically without exhibiting significant transient behavior. The first part of this thesis focuses on measures revealing the degree of robust stability of a dynamical system. We put special emphasis on pseudospectral measures, those based on the eigenvalues of nearby matrices for a first-order system or matrix polynomials for a higher-order system. We present algorithms for the computation of pseudospectral measures for continuous and discrete systems with a quadratic rate of convergence, and analyze their accuracy in the presence of rounding errors. We also provide an efficient algorithm for the numerical radius of a matrix, the modulus of the outermost point in the field of values (the set of Rayleigh quotients) of the matrix. These algorithms are inspired by algorithms of Byers, Boyd-Balakrishnan, and Burke-Lewis-Overton.
The second part is devoted to indicators of robust controllability. We call a system robustly controllable if it is controllable and remains controllable under perturbations of interest. We describe efficient methods for the computation of the distance to the closest uncontrollable system. Our first algorithm for the first-order distance to uncontrollability depends on a grid and is well-suited for low precision approximation. We then discuss algorithms for high precision approximation of the first-order distance to uncontrollability. These are based on the bisection method of Gu and the trisection variant of Burke-Lewis-Overton.
These algorithms require the extraction of the real eigenvalues of matrices of size $O(n^2)$, typically at a cost of $O(n^6)$, where $n$ is the dimension of the state space. We propose a new divide-and-conquer algorithm that reduces the cost to $O(n^4)$ on average in both theory and practice, and to $O(n^5)$ in the worst case. The new iterative approach to the extraction of real eigenvalues may also be useful in other contexts. For higher-order systems we derive a singular value characterization and exploit this characterization for the computation of the higher-order distance to uncontrollability to low precision. The algorithms in this thesis assume arbitrary complex perturbations are applicable to the input system and usually require the extraction of the imaginary eigenvalues of Hamiltonian matrices (or even matrix polynomials) or the unit eigenvalues of symplectic pencils (or palindromic matrix polynomials).
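The numerical radius mentioned above has a standard characterization as a maximum, over rotations, of the largest eigenvalue of the rotated matrix's Hermitian part; a crude grid scan can exploit this (a sketch of ours, not the quadratically convergent algorithm of the thesis):

```python
import numpy as np

# Crude grid sketch (not the quadratically convergent algorithm of the thesis)
# using the standard characterization of the numerical radius:
#   r(A) = max over theta of lambda_max( (e^{i theta} A + e^{-i theta} A^*) / 2 ).

def numerical_radius(A, samples=720):
    A = np.asarray(A, dtype=complex)
    r = -np.inf
    for theta in np.linspace(0.0, 2 * np.pi, samples, endpoint=False):
        # Hermitian part of the rotated matrix; eigvalsh returns ascending order
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        r = max(r, np.linalg.eigvalsh(H)[-1])
    return r

# field of values of [[0, 2], [0, 0]] is the disk of radius 1, so r(A) = 1
A = [[0.0, 2.0], [0.0, 0.0]]
```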
-
Ph.D. Thesis
2006
Algorithmic Algebraic Model Checking: Hybrid Automata & Systems Biology
Mysore, Venkatesh Pranesh
Abstract
|
PDF
Title: Algorithmic Algebraic Model Checking: Hybrid Automata & Systems Biology
Candidate: Mysore, Venkatesh Pranesh
Advisor(s): Mishra, Bud
Abstract:
Systems Biology strives to hasten our understanding of the fundamental principles of life by adopting a systems-level approach for the analysis of cellular function and behavior. One popular framework for capturing the chemical kinetics of interacting biochemicals is Hybrid Automata. Our goal in this thesis is to aid Systems Biology research by improving the current understanding of hybrid automata, by developing techniques for symbolic rather than numerical analysis of the dynamics of biochemical networks modeled as hybrid automata, and by honing the theory to two classes of problems: kinetic mass action based simulation in genetic regulatory & signal transduction pathways, and pseudo-equilibrium simulation in metabolic networks.
We first provide new constructions that prove that the "open" Hierarchical Piecewise Constant Derivative (HPCD) subclass is closer to the decidability and undecidability frontiers than was previously understood. After concluding that the HPCD-like classes are unsuitable for modeling chemical reactions, our quest for semi-decidable subclasses leads us to define the "semi-algebraic" subclass. This is the most expressive hybrid automaton subclass amenable to rigorous symbolic temporal reasoning. We begin with the bounded reachability problem, and then show how the dense-time temporal logic Timed Computation Tree Logic (TCTL) can be model-checked by exploiting techniques from real algebraic geometry, primarily real quantifier elimination. We also prove the undecidability of reachability in the Blum-Shub-Smale Turing Machine formalism. We then develop efficient approximation strategies by extending bisimulation partitioning, rectangular grid-based approximation, polytopal approximation, and time discretization. Finally, we develop a uniform algebraic framework for modeling biochemical and metabolic networks, also extending flux balance analysis. We present some preliminary results using a prototype tool, Tolque, a symbolic algebraic dense-time model checker for semi-algebraic hybrid automata that uses Qepcad for quantifier elimination.
The "Algorithmic Algebraic Model Checking" techniques developed in this thesis provide a theoretically grounded, mathematically sound platform for powerful symbolic temporal reasoning over biochemical networks and other semi-algebraic hybrid automata. It is our hope that, by building upon this thesis, together with the development of computationally efficient, parallelizable quantifier elimination algorithms and the integration of different computer algebra tools, scientific software systems will emerge that fundamentally transform the way biochemical networks (and other hybrid automata) are analyzed.
-
Ph.D. Thesis
2006
Building an Automatic Phenotyping System of Developing Embryos
Ning, Feng
Abstract
|
PDF
Title: Building an Automatic Phenotyping System of Developing Embryos
Candidate: Ning, Feng
Advisor(s): LeCun, Yann
Abstract:
This dissertation presents a learning-based system for the detection, identification, localization, and measurement of various sub-cellular structures in microscopic images of developing embryos. The system analyzes sequences of images obtained through DIC microscopy and detects cell nuclei, cytoplasm, and cell walls automatically. The system described in this dissertation is the key initial component of a fully automated phenotype analysis system.
Our study primarily concerns the early stages of development of C. elegans nematode embryos, from fertilization to the four-cell stage. The method proposed in this dissertation consists of learning the entire processing chain {\em from end to end}, from raw pixels to ultimate object categories.
The system contains three modules: (1) a convolutional network trained to classify each pixel into five categories: cell wall, cytoplasm, nuclear membrane, nucleus, outside medium; (2) an Energy-Based Model which cleans up the output of the convolutional network by learning local consistency constraints that must be satisfied by label images; (3) a set of elastic models of the embryo at various stages of development that are matched to the label images.
When observing normal (wild-type) embryos, it is possible to visualize important cellular functions such as nuclear movements and fusions, cytokinesis, and the establishment of crucial cell-cell contacts. These events are highly reproducible from embryo to embryo, but deviate from normal behavior when the function of a specific gene is perturbed, thereby allowing the detection of correlations between gene activities and specific early embryonic events. One important goal of the system is to automatically detect whether development is normal (and therefore not particularly interesting) or abnormal and worth investigating. Another important goal is to automatically extract quantitative measurements such as the migration speed of the nuclei and the precise times of cell divisions.
-
Ph.D. Thesis
2006
A Polymorphic Type System and Compilation Scheme for Record Concatenation
Osinski, Edward
Abstract
|
PDF
Title: A Polymorphic Type System and Compilation Scheme for Record Concatenation
Candidate: Osinski, Edward
Advisor(s): Goldberg, Benjamin
Abstract:
Records, which organize closely related groups of data so that the group can be treated as a unit while individual fields remain accessible by name, are supported in almost every programming language. In virtually all statically typed languages, however, the operations permitted on records are extremely limited. Providing greater flexibility in dealing with records, while simultaneously retaining the benefits of static type checking, is a desirable goal.
This problem has generated considerable interest, and a number of type systems dealing with records have appeared in the literature. In this work, we present the first polymorphic type system that is expressive enough to type a number of complex operations on records, including three forms of concatenation and natural join. In addition, the precise types of the records involved are inferred, eliminating the burden of explicit type declarations. Another aspect of this problem is an efficient implementation of records and their associated operations; we also present a compilation method which accomplishes this goal.
-
Ph.D. Thesis
2006
A Probabilistic Learning Approach to Attribute Value Inconsistency Resolution
Pevzner, Ilya
Abstract
|
PDF
Title: A Probabilistic Learning Approach to Attribute Value Inconsistency Resolution
Candidate: Pevzner, Ilya
Advisor(s): Goldberg, Arthur
Abstract:
Resolving inconsistencies in data is a problem of critical practical importance. Inconsistent data arises whenever an attribute takes on multiple, inconsistent, values. This may occur when a particular entity is stored multiple times in one database, or in multiple databases that are combined.
We investigate Attribute Value Inconsistency Resolution (AVIR), the problem of semi-automatically resolving data inconsistencies among multiple database records that describe the same person or thing.
Our survey of the area shows that existing solutions are either limited in scope or impose a significant burden on their users. Either they do not cover all types of inconsistencies and attributes, or they require users to write or choose attribute resolution functions for each potentially conflicting attribute.
Our machine-learning-based approach applies to all types of inconsistencies and attributes, and automatically selects appropriate resolution functions based on the conflicting data. We have developed a system that uses a set of binary features, which detect data properties and relationships, together with resolution functions that merge data. Many such features and resolution functions have been written. The system uses supervised learning with maximum likelihood estimation to determine which function(s) to apply, based on which feature(s) fire.
We have validated our system by comparing its error rate, decision rate and decision accuracy on a test data set to baseline values determined by a clairvoyant application of a standard approach where each potentially conflicting attribute is resolved by the best resolution function for the attribute.
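The selection step described above, choosing a resolution function from the pattern of firing features via maximum likelihood, can be sketched as a simple count-based selector. This is an illustrative simplification, not the thesis's learner; the feature names and resolver names below are hypothetical.

```python
from collections import Counter, defaultdict

def train_selector(examples):
    """Maximum-likelihood selector: examples are (feature_tuple, resolver)
    pairs from labeled conflicts. For each observed feature pattern we
    estimate P(resolver | pattern) by counts and keep the argmax."""
    counts = defaultdict(Counter)
    for features, resolver in examples:
        counts[features][resolver] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Hypothetical features: (is_numeric, shares_prefix, one_value_empty)
model = train_selector([
    ((True, False, False), "take_mean"),
    ((True, False, False), "take_mean"),
    ((False, True, False), "take_longer"),
])
```

At resolution time, the system would evaluate the binary features on a conflicting attribute pair and apply `model[pattern]` when the pattern was seen in training.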
-
TR2006-881
2006
PSL Model Checking and Run-time Verification via Testers
Pnueli, Amir;
Zaks, Aleksandr
Abstract
|
PDF
Title: PSL Model Checking and Run-time Verification via Testers
Author(s): Pnueli, Amir; Zaks, Aleksandr
Abstract:
The paper introduces the construct of \emph{temporal testers} as a compositional basis for the construction of automata corresponding to temporal formulas in the PSL logic. Temporal testers can be viewed as (non-deterministic) transducers that, at any point, output a boolean value which is 1 iff the corresponding temporal formula holds starting at the current position.
The main advantage of testers, compared to acceptors (such as B&uuml;chi automata), is that they are compositional: a tester for a compound formula can be constructed out of the testers for its sub-formulas. In this paper, we extend the application of the testers method from LTL to the logic PSL.
Besides providing the construction of testers for PSL, we indicate how the symbolic representation of the testers can be directly utilized for efficient model checking and run-time monitoring.
-
Ph.D. Thesis
2006
Animating Autonomous Pedestrians
Shao, Wei
Abstract
|
PDF
Title: Animating Autonomous Pedestrians
Candidate: Shao, Wei
Advisor(s): Terzopoulos, Demetri
Abstract:
This thesis addresses the difficult open problem in computer graphics of autonomous human modeling and animation, specifically of emulating the rich complexity of real pedestrians in urban environments.
We pursue an artificial life approach that integrates motor, perceptual, behavioral, and cognitive components within a model of pedestrians as highly capable individuals. Our comprehensive model features innovations in these components, as well as in their combination, yielding results of unprecedented fidelity and complexity for fully autonomous multi-human simulation in large urban environments. Our pedestrian model is entirely autonomous and requires no centralized, global control whatsoever.
To animate a variety of natural interactions between numerous pedestrians and their environment, we represent the environment using hierarchical data structures, which efficiently support the perceptual queries of the autonomous pedestrians that drive their behavioral responses and sustain their ability to plan their actions on local and global scales.
The animation system that we implement using the above models enables us to run long-term simulations of pedestrians in large urban environments without manual intervention. Real-time simulation can be achieved for well over a thousand autonomous pedestrians. With each pedestrian under his/her own autonomous control, the self-animated characters imbue the virtual world with liveliness, social (dis)order, and a realistically complex dynamic.
We demonstrate the automated animation of human activity in a virtual train station, and we employ our pedestrian simulator in the context of virtual archaeology for visualizing urban social life in reconstructed archaeological sites. Our pedestrian simulator is also serving as the basis of a testbed for designing and experimenting with visual sensor networks in the field of computer vision.
-
Ph.D. Thesis
2006
Complexity Analysis of Algorithms in Algebraic Computation
Sharma, Vikram
Abstract
|
PDF
Title: Complexity Analysis of Algorithms in Algebraic Computation
Candidate: Sharma, Vikram
Advisor(s): Yap, Chee
Abstract:
Numerical computations with real algebraic numbers require algorithms for approximating and isolating real roots of polynomials. A classical choice for root approximation is Newton's method. For an analytic function on a Banach space, Smale introduced the concept of approximate zeros, i.e., points from which Newton's method for the function converges quadratically. To identify these approximate zeros he gave computationally verifiable convergence criteria called point estimates. However, in developing these results Smale assumed that Newton's method is computed exactly. For a system of $n$ homogeneous polynomials in $n+1$ variables, Malajovich developed point estimates for a different definition of approximate zero, assuming that all operations in Newton's method are computed with fixed precision. In the first half of this dissertation, we develop point estimates for these two different definitions of approximate zeros of an analytic function on a Banach space, but assume the strong bigfloat computational model of Brent, i.e., where all operations involve bigfloats with varying precision. In this model, we derive a uniform complexity bound for approximating a root of a zero-dimensional system of $n$ integer polynomials in $n$ variables. We also derive a non-asymptotic bound, in terms of the condition number of the system, on the precision required to implement the robust Newton method.
The second part of the dissertation analyses the worst-case complexity of two algorithms for isolating the real roots of a square-free polynomial with real coefficients: the Descartes method and Akritas' continued fractions algorithm. The analysis of both algorithms is based upon amortization bounds such as the Davenport-Mahler bound. For the Descartes method, we give a unified framework that encompasses both the power basis and the Bernstein basis variant of the method. We derive an $O(n(L+\log n))$ bound on the size of the recursion tree obtained by applying the method to a square-free polynomial of degree $n$ with integer coefficients of bit-length $L$; the bound is tight for $L=\Omega(\log n)$. Based upon this result we readily obtain the best known bit-complexity bound of $\wt{O}(n^4L^2)$ for the Descartes method, where $\wt{O}$ indicates that logarithmic factors are ignored. Similar worst-case bounds on the bit-complexity of Akritas' algorithm were not known in the literature. We provide the first such bound, $\wt{O}(n^{12}L^3)$, for a square-free integer polynomial of degree $n$ with coefficients of bit-length $L$.
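The notion of an approximate zero from the first part, a point from which Newton's method converges quadratically, can be illustrated with a plain double-precision sketch. This is only an illustration of the iteration itself; the thesis works in the bigfloat model with varying precision, and the point-estimate criteria are not implemented here.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method for a univariate function f with derivative df.
    From an approximate zero, the error roughly squares at each step."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate the positive root of x^2 - 2 from a nearby starting point.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

Starting from 1.5, the iterates gain roughly twice as many correct digits per step, the quadratic convergence that the point estimates certify in advance.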
-
Ph.D. Thesis
2006
Pairwise Comparison between Genomic Sequences and Optical-Maps
Sun, Bing
Abstract
|
PDF
Title: Pairwise Comparison between Genomic Sequences and Optical-Maps
Candidate: Sun, Bing
Advisor(s): Mishra, Bud
Abstract:
With the development and improvement of high-throughput experimental technologies, massive amounts of biological data, including genomic sequences and optical maps, have been collected for various species. Comparative techniques play a central role in investigating the adaptive significance of organismal traits and in revealing evolutionary relations among organisms by comparing these data. This dissertation presents two efficient comparative analysis tools, used in comparative genomics and in comparative optical-map studies, respectively.
A complete genome sequence of an organism can be viewed as its ultimate genetic map, in the sense that the heritable information is encoded within the DNA and the order of nucleotides along chromosomes is known. Comparative genomics can be applied to find functional sites by comparing genetic maps. Comparing vertebrate genomes requires efficient cross-species sequence alignment programs. The first tool introduced in this thesis is COMBAT (Clean Ordered Mer-Based Alignment Tool), a new mer-based method which can rapidly search for highly similar translated genomic sequences, using the stable-marriage algorithm (SM) as an alignment filter. In our experiments, COMBAT is applied to comparative analysis between yeast genomes, and between the human genome and the recently published bovine genome. The homologous blocks identified by COMBAT are comparable with the alignments produced by BLASTP and BLASTZ.
When genetic maps are not available, other genomic maps, including optical maps, can be constructed. An optical map is an ordered enumeration of the restriction sites along a genome, together with the estimated lengths of the restriction fragments between consecutive sites. CAPO (Comparative Analysis and Phylogeny with Optical-Maps), introduced as a second technique in this thesis, is a tool for inferring phylogeny based on pairwise optical-map comparison and bipartite graph matching. CAPO combines the stable matching algorithm with either the Unweighted Pair Group Method with Arithmetic Averaging (UPGMA) or the Neighbor-Joining (NJ) method for constructing phylogenetic trees. This new algorithm is capable of constructing phylogenetic trees in logarithmic steps and performs well in practice. Using optical maps constructed in silico and in vivo, our work shows that the UPGMA-flavored and NJ-flavored trees produced by CAPO share substantially overlapping tree topologies and are biologically meaningful.
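Both tools rely on stable matching as a core step. A minimal Gale-Shapley sketch is shown below for illustration only; in COMBAT and CAPO the preference lists would be derived from alignment or map-similarity scores, and complete preference lists are assumed here.

```python
def stable_marriage(proposer_prefs, acceptor_prefs):
    """Gale-Shapley stable matching. proposer_prefs[p] is p's ranked list
    of acceptors; acceptor_prefs[a] is a's ranked list of proposers.
    Returns a proposer-optimal stable matching {proposer: acceptor}."""
    # rank[a][p]: position of proposer p in a's list (lower = preferred)
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                           # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])        # displaced partner re-enters
            engaged[a] = p
        else:
            free.append(p)                 # rejected; p tries next choice
    return {p: a for a, p in engaged.items()}
```

Used as a filter, the matching discards candidate pairings that are not mutually preferred, which is what makes it attractive for pruning spurious mer matches or map correspondences.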
-
Ph.D. Thesis
2006
Exploiting Service Usage Information for Optimizing Server Resource Management
Totok, Alexander
Abstract
|
PDF
Title: Exploiting Service Usage Information for Optimizing Server Resource Management
Candidate: Totok, Alexander
Advisor(s): Karamcheti, Vijay
Abstract:
It is difficult to provision and manage modern component-based Internet services so that they provide stable quality-of-service (QoS) guarantees to their clients, because: (1) component middleware are complex software systems that expose several independently tuned configurable application runtime policies and server resource management mechanisms; (2) session-oriented client behavior with complex data access patterns makes it hard to predict what impact tuning these policies and mechanisms has on application behavior; (3) component-based Internet services exhibit complex structural organization with requests of different types accessing different components and data sources, which could be distributed and/or replicated for failover, performance, or business purposes.
This dissertation attempts to alleviate this situation by targeting three interconnected goals: (1) providing improved QoS guarantees to the service clients, (2) optimizing server resource utilization, and (3) providing application developers with guidelines for natural application structuring, which enable efficient use of the proposed mechanisms for improving service performance. Specifically, we explore the thesis that exposing and using detailed information about how clients use component-based Internet services enables mechanisms that achieve the range of goals listed above. To validate this thesis we show its applicability to the following four problems: (1) maximizing reward brought by Internet services, (2) optimizing utilization of server resource pools, (3) providing session data integrity guarantees, and (4) enabling service distribution in wide-area environments.
The techniques that we propose for the identified problems are applicable at both the application structuring stage and the application operation stage, and range from automatic (i.e., performed by middleware in real time) to manual (i.e., involving the programmer or the service provider). These techniques take into account service usage information exposed at different levels, ranging from the high-level structure of user sessions to low-level information about data access patterns and resource utilization by requests of different types. To show the benefits of the proposed techniques, we implement various middleware mechanisms in the JBoss application server, which utilizes the J2EE component model, and comprehensively evaluate them on several publicly available sample J2EE applications - Java Pet Store, RUBiS, and our own implementation of the TPC-W web transactional benchmark. Our experimental results show that the proposed techniques achieve optimal utilization of server resources and improve application performance by up to two times for centralized Internet services and by up to six times for distributed ones.
-
Ph.D. Thesis
2006
Time Series Matching: A Multi-Filter Approach
Wang, Zhihua
Abstract
|
PDF
Title: Time Series Matching: A Multi-Filter Approach
Candidate: Wang, Zhihua
Advisor(s): Shasha, Dennis
Abstract:
Data arriving in time order (a time series) arises in disciplines ranging from music to meteorology to finance to motion capture, to name a few. In many cases, a natural way to query the data is what we call time series matching: a user enters a time series by hand, keyboard, or voice, and the system finds "similar" time series.
Existing time series similarity measures, such as DTW (Dynamic Time Warping), can accommodate certain timing errors in the query and perform with high accuracy on small databases. However, they have high computational complexity, and their accuracy drops dramatically as the data set grows. More importantly, there are types of errors that cannot be captured by any single similarity measure.
Here we present a general time series matching framework that can easily optimize, combine, and test different features to execute a fast similarity search based on the application's requirements. We use a multi-filter chain together with boosting algorithms to compose a ranking algorithm. Each filter is a classifier which removes bad candidates by comparing certain features of the time series data; some filters use a boosting algorithm to combine a few weak classifiers into a strong one. The final filter gives a ranked list of candidates in the reference data that match the query.
The framework is applied to build query algorithms for a Query-by-Humming system. Experiments show that the algorithm has a more accurate similarity measure and its response time increases much slower than the pure DTW algorithm when the number of songs in the database increases from 60 to 1400.
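The pure-DTW baseline against which the filter chain is compared can be sketched with the textbook dynamic-programming recurrence. This is an illustration of the O(nm) cost the filters are designed to avoid, not the thesis's implementation.

```python
def dtw_distance(s, t):
    """Classic DTW between two numeric sequences, absolute-difference
    cost, full O(len(s) * len(t)) dynamic-programming table."""
    n, m = len(s), len(t)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch s
                                 D[i][j - 1],      # stretch t
                                 D[i - 1][j - 1])  # step both
    return D[n][m]
```

Because warping lets one sample of the query align with several samples of a song (and vice versa), a hummed query with timing errors can still reach distance zero against a matching melody, which is exactly why DTW is accurate but expensive at scale.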
-
Ph.D. Thesis
2006
Incremental Web Search: Tracking Changes in the Web
Wang, Ziyang
Abstract
|
PDF
Title: Incremental Web Search: Tracking Changes in the Web
Candidate: Wang, Ziyang
Advisor(s): Davis, Ernest
Abstract:
A large amount of new information is posted on the Web every day. Large-scale web search engines often update their indexes slowly and are unable to present such information in a timely manner. Here we present our solutions for finding new information on the web by tracking changes to web documents.
First, we present algorithms and techniques for solving the following problems: detecting web pages that have changed, extracting changes from different versions of a web page, and evaluating the significance of web changes. We propose a two-level change detector, comprising a MetaDetector and a ContentDetector; the combined detector reduces network traffic by about 67%. Our algorithm for extracting web changes consists of three steps: document tree construction, document tree encoding, and tree matching. It has linear time complexity and effectively extracts the changed content from different versions of a web page. To evaluate web changes, we propose a unified ranking framework combining three metrics: popularity ranking, content-based ranking, and evolution ranking. Our methods can identify and deliver important new information in a timely manner.
Second, we present an application using the techniques and algorithms we developed, named "Web Daily News Assistant (WebDNA): finding what's new on Your Web". It is a search tool that helps community users search new information on their community web. Currently WebDNA is deployed on the New York University web site.
Third, we model the changes of web documents using survival analysis. Modeling web changes is useful for web crawler scheduling and web caching. Changes to web pages are typically modeled as a Poisson process, using a necessarily incomplete detection history to estimate the true frequency of changes. However, other features that can predict change frequency have not previously been studied. Our analysis shows that PageRank is a good predictor: statistically, the change frequency is proportional to $\exp[0.36\cdot (\ln(PageRank)+C)]$. We further study the problem of combining this predictor with the change history in a unified framework. An improved estimator of change frequency is presented, which reduces the error by 27.3% when the change history is short.
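Since exp(0.36 (ln(PageRank) + C)) is proportional to PageRank^0.36, the predictor is easy to sketch. The calibration constants and the convex combination with the observed history below are hypothetical illustrations, not the thesis's fitted estimator.

```python
import math

def predicted_change_freq(pagerank, C=0.0, scale=1.0):
    """PageRank-based prior of the fitted form
    scale * exp(0.36 * (ln(pagerank) + C)), i.e. proportional to
    pagerank ** 0.36. C and scale are hypothetical constants."""
    return scale * math.exp(0.36 * (math.log(pagerank) + C))

def combined_estimate(pagerank, changes_observed, checks, weight=0.5):
    """Blend the prior with the observed change rate from a (possibly
    short) detection history. The blend weight is illustrative."""
    history_rate = changes_observed / checks if checks else 0.0
    return (weight * predicted_change_freq(pagerank)
            + (1 - weight) * history_rate)
```

With a short history the prior dominates usefully; as more checks accumulate, the observed rate carries the estimate, which is the intuition behind combining the two sources.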
-
Ph.D. Thesis
2006
Fast Algorithms for Burst Detection
Zhang, Xin
Abstract
|
PDF
Title: Fast Algorithms for Burst Detection
Candidate: Zhang, Xin
Advisor(s): Shasha, Dennis
Abstract:
Events occur in every aspect of our lives.
An unexpectedly large number of events occurring within some measurement window (e.g. a time interval or a spatial region) is called a {\em burst}, suggesting unusual behavior or activity. Bursts come up in many natural and social processes. Monitoring the occurrence of bursts whose duration is unknown in a fast data-stream environment is a challenging task.
This work describes efficient data structures and algorithms for high performance burst detection under different settings. Our view is that bursts, as an unusual phenomenon, constitute a useful preliminary primitive in a knowledge discovery hierarchy. Our intent is to build a high performance primitive detection algorithm to support high-level data mining tasks.
The work starts with an algorithmic framework comprising a family of data structures and a heuristic optimization algorithm that chooses an efficient data structure for the given inputs. The advantage of this framework is that it adapts to different inputs. Experiments on both synthetic and real-world data show that the new framework significantly outperforms existing techniques over a variety of inputs.
Furthermore, we present a greedy dynamic detection algorithm that handles changing data by evolving the structure to adapt to the incoming stream. In most cases it achieves better performance than a static algorithm on both synthetic and real data streams.
We have applied this framework to real-world applications in physics, stock trading, and website traffic monitoring. All the case studies show that the framework performs well.
We extend this framework to multi-dimensional data and use it in an epidemiology simulation to detect the outbreak and spread of infectious disease.
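As a baseline for what the framework's data structures accelerate, burst detection for a single, known window size can be sketched naively. The thesis's structures answer the harder question, bursts over many unknown window sizes at once, but the primitive being detected is the same.

```python
from collections import deque

def detect_bursts(timestamps, window, threshold):
    """Flag moments when the number of events in the trailing `window`
    exceeds `threshold`. Naive single-window sliding scan; timestamps
    are assumed sorted. Returns (window_start_event, count) pairs."""
    bursts = []
    window_events = deque()
    for t in timestamps:
        window_events.append(t)
        # Drop events that fell out of the trailing window ending at t.
        while window_events[0] <= t - window:
            window_events.popleft()
        if len(window_events) > threshold:
            bursts.append((window_events[0], len(window_events)))
    return bursts
```

Running this once per candidate window size is what makes the naive approach expensive; maintaining a shared hierarchical structure over window sizes is the speedup the framework provides.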
-
Ph.D. Thesis
2006
High Performance Algorithms for Multiple Streaming Time Series
Zhao, Xiaojian
Abstract
|
PDF
Title: High Performance Algorithms for Multiple Streaming Time Series
Candidate: Zhao, Xiaojian
Advisor(s): Shasha, Dennis
Abstract:
Data arriving in time order (a data stream) arises in fields ranging from physics to finance to medicine to music, to name a few. Often the data comes from sensors (in physics and medicine for example) whose data rates continue to improve dramatically as sensor technology improves. Furthermore, the number of sensors is increasing, so analyzing data between sensors becomes ever more critical in order to distill knowledge from the data. Fast response is desirable in many applications (e.g. to aim a telescope at an activity of interest or to perform a stock trade). In applications such as finance, recent information, e.g. correlation, is of far more interest than older information, so analysis over sliding windows is a desired operation.
These three factors -- huge data size, fast response, and windowed computation -- motivated this work. Our intent is to build a foundational library of primitives for online or near-online statistical analysis, e.g. windowed correlation, incremental matching pursuit, and burst detection, on thousands or even millions of time series. Besides the algorithms, we also propose the concept of ``uncooperative'' time series, whose power spectra are spread over all frequencies without any regularity.
Previous work showed how to do windowed correlation with Fast Fourier Transforms and Wavelet Transforms, but such techniques do not work for uncooperative time series. This thesis shows how to use sketches (random projections) in a way that combines several simple techniques -- sketches, convolution, structured random vectors, grid structures, combinatorial design, and bootstrapping -- to achieve high-performance windowed correlation over a variety of data sets. Experiments confirm the asymptotic analysis.
To conduct matching pursuit (MP) over time series windows, an incremental scheme is designed to reduce the computational effort. Our empirical study demonstrates a substantial improvement in speed.
In previous work, Zhu and Shasha introduced an efficient algorithm to monitor bursts within windows of multiple sizes. We implemented it in a physical system by overcoming several practical challenges. Experimental results support the authors' linear running time analysis.
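The core idea behind sketches can be illustrated with a minimal random-projection sketch: project each window onto a fixed set of random +1/-1 vectors, so that inner products (and hence correlations of normalized windows) are approximately preserved in a much lower dimension. The structured random vectors, grids, and combinatorial designs that make the thesis's version fast are omitted from this sketch.

```python
import math
import random

def make_sketcher(dim, sketch_size, seed=0):
    """Build a sketch function for windows of length `dim`.
    Each sketch coordinate is a scaled dot product with a fixed random
    +1/-1 vector; E[<sketch(x), sketch(y)>] = <x, y>."""
    rng = random.Random(seed)
    R = [[rng.choice((-1.0, 1.0)) for _ in range(dim)]
         for _ in range(sketch_size)]
    scale = 1.0 / math.sqrt(sketch_size)

    def sketch(window):
        return [scale * sum(r_i * x_i for r_i, x_i in zip(row, window))
                for row in R]

    return sketch
```

Comparing short sketches instead of full windows is what turns an all-pairs windowed-correlation scan into something feasible for thousands of streams; the grid structure then prunes pairs whose sketches are far apart.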
-
Ph.D. Thesis
2006
Distribution of Route-Impacting Control Information in a Publish/Subscribe System with Delivery Guarantees
Zhao, Yuanyuan
Abstract
|
PDF
Title: Distribution of Route-Impacting Control Information in a Publish/Subscribe System with Delivery Guarantees
Candidate: Zhao, Yuanyuan
Advisor(s): Kedem, Zvi
Abstract:
Event-driven middleware is a popular infrastructure for building large-scale asynchronous distributed systems. Content-based publish/subscribe systems are a type of event-driven middleware that provides service flexibility and specification expressiveness, creating opportunities for improving reliability and efficiency of the system.
The use of route-impacting control information, such as subscription filters and access control rules, has the potential to enable efficient routing for applications that require selective and regional distribution of events. Such applications range from financial information systems to sensor networks to service-oriented architectures. However, it has been a great challenge to design correct and efficient protocols for distributing control information and exploiting it to achieve efficient and highly available message routing.
In this dissertation, we study the problem of distributing and utilizing route-impacting control information. We present an abstract model of content-based routing and reliable delivery in redundant broker networks. Based on this model, we design a generic algorithm that propagates control information, performs content-based routing, and delivers events reliably. The algorithm is efficient and lightweight in that it does not require heavyweight consensus protocols between redundant brokers. We extend this generic algorithm to support consolidation and merging of control information; existing protocols can be viewed as particular encodings and optimizations of the generic algorithm. We show an encoding using virtual time vectors that supports reliable delivery and deterministic dynamic access control in redundant broker networks. In our system, the semantics of reliable delivery is clearly defined even when subscription information and access control policies change dynamically. That is, one or more subscribers of the same principal will receive exactly the same sequence of messages (modulo subscription filter differences) regardless of where they are connected and of the network latency and failure conditions in their parts of the network.
We have implemented these protocols in a fully-functioning content-based publish/subscribe system - Gryphon. We evaluate its efficiency, scalability and high availability.
-
TR2005-867
2005
Infrastructure for Automatic Dynamic Deployment of J2EE Applications in Distributed Environments
Akkerman, Anatoly;
Totok, Alexander; Karamcheti, Vijay
Abstract
|
PDF
Title: Infrastructure for Automatic Dynamic Deployment of J2EE Applications in Distributed Environments
Author(s): Akkerman, Anatoly; Totok, Alexander; Karamcheti, Vijay
Abstract:
Recent studies have shown the potential of component frameworks for building flexible, adaptable applications for deployment in distributed environments. However, this approach is hindered by the complexity of deploying component-based applications, which usually involves a great deal of configuration of both the application components and the system services they depend on. In this paper we propose an infrastructure for automatic dynamic deployment of J2EE applications that specifically addresses the problems of (1) inter-component connectivity specification and its effects on component configuration and deployment; and (2) application component dependencies on application server services, and their configuration and deployment. The proposed infrastructure provides simple yet expressive abstractions for potential application adaptation through dynamic deployment and undeployment of components. We implement the infrastructure as part of the JBoss J2EE application server and test it on several sample J2EE applications.
-
TR2005-858
2005
Remembrance of Experiments Past: Analyzing Time Course Datasets to Discover Complex Temporal Invariants
Antoniotti, Marco;
Ramakrishnan, Naren; Kumar, Deept; Spivak, Marina; Mishra, Bud
Abstract
|
PDF
Title: Remembrance of Experiments Past: Analyzing Time Course Datasets to Discover Complex Temporal Invariants
Author(s): Antoniotti, Marco; Ramakrishnan, Naren; Kumar, Deept; Spivak, Marina; Mishra, Bud
Abstract:
Motivation: Current microarray data analysis techniques draw the biologist's attention to targeted sets of genes but do not otherwise present global and dynamic perspectives (e.g., invariants) inferred collectively over a dataset. Such perspectives are important in order to obtain a process-level understanding of the underlying cellular machinery, especially how cells react, respond, and recover from stresses.
Results: We present GOALIE, a novel computational approach and software system that uncovers formal temporal logic models of biological processes from time course microarray datasets. GOALIE `redescribes' data into the vocabulary of biological processes and then pieces together these redescriptions into a Kripke-structure model, where possible worlds encode transcriptional states and are connected to future possible worlds. This model then supports various query, inference, and comparative assessment tasks, besides providing descriptive process-level summaries. An application of GOALIE to characterizing the yeast (S. cerevisiae) cell cycle is described.
Availability: GOALIE runs on Windows XP platforms and is available on request from the authors.
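The Kripke-structure view described above can be illustrated with a toy model: possible worlds carry process labels, edges lead to future possible worlds, and temporal-logic queries walk the graph. The state names, labels, and transitions below are invented for illustration and are not drawn from GOALIE itself.

```python
# Toy Kripke structure: worlds stand for transcriptional states, each
# labeled with active biological processes; "next" lists the possible
# future worlds (all labels here are illustrative).
kripke = {
    "t0": {"labels": {"stress-response"}, "next": ["t1"]},
    "t1": {"labels": {"stress-response", "repair"}, "next": ["t2"]},
    "t2": {"labels": {"recovery"}, "next": ["t2"]},
}

def eventually(state, prop, seen=None):
    """EF prop: does some path from `state` reach a world labeled `prop`?"""
    seen = seen if seen is not None else set()
    if state in seen:
        return False
    seen.add(state)
    if prop in kripke[state]["labels"]:
        return True
    return any(eventually(s, prop, seen) for s in kripke[state]["next"])
```

A query such as `eventually("t0", "recovery")` asks whether a stressed state can reach recovery, the kind of process-level question the abstract describes.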
-
TR2005-878
2005
An Abstract Decision Procedure for Satisfiability in the Theory of Recursive Data Types
Barrett, Clark;
Shikanian, Igor; Tinelli, Cesare
Abstract
|
PDF
Title: An Abstract Decision Procedure for Satisfiability in the Theory of Recursive Data Types
Author(s): Barrett, Clark; Shikanian, Igor; Tinelli, Cesare
Abstract:
The theory of recursive data types is a valuable modeling tool for software verification. In the past, decision procedures have been proposed for both the full theory and its universal fragment. However, previous work has been limited in various ways, including an inability to deal with multiple constructors, multi-sorted logic, and mutually recursive data types. More significantly, previous algorithms for the universal case have been based on inefficient nondeterministic guesses and have been described in fairly complex procedural terms.
We present an algorithm which addresses these issues for the universal theory. The algorithm is presented declaratively as a set of abstract rules which are terminating, sound, and complete. We also describe strategies for applying the rules and explain why our recommended strategy is more efficient than those used by previous algorithms. Finally, we discuss how the algorithm can be used within a broader framework of cooperating decision procedures.
-
TR2005-869
2005
Squidball: An Experiment in Large-Scale Motion Capture and Game Design
Bregler, Christoph;
Castiglia, Clothilde; DeVincenzo, Jessica; DuBois, Roger Luke; Feeley, Kevin; Igoe, Tom; Meyer, Jonathan; Naimark, Michael; Postelnicu, Alexandru; Rabinovich, Michael; Rosenthal, Sally; Salen, Katie; Sudol, Jeremi; Wright, Bo
Abstract
|
PDF
Title: Squidball: An Experiment in Large-Scale Motion Capture and Game Design
Author(s): Bregler, Christoph; Castiglia, Clothilde; DeVincenzo, Jessica; DuBois, Roger Luke; Feeley, Kevin; Igoe, Tom; Meyer, Jonathan; Naimark, Michael; Postelnicu, Alexandru; Rabinovich, Michael; Rosenthal, Sally; Salen, Katie; Sudol, Jeremi; Wright, Bo
Abstract:
This paper describes Squidball, a new large-scale motion-capture-based game. It was tested on audiences of up to 4000 players at SIGGRAPH 2004 last summer. Building it required the world's largest motion capture space and the largest motion capture markers (balls), and posed many other challenges in technology, production, game play, and social studies. Our aim was to entertain the SIGGRAPH Electronic Theater audience with a cooperative and energetic game played by everybody together, controlling real-time graphics and audio by bouncing and batting multiple large helium-filled balloons across the entire theater space. We detail in this paper the lessons learned in producing such a system and game, and argue why we believe Squidball was a great success.
-
TR2005-860
2005
A Domain Decomposition Discretization of Parabolic Problems
Dryja, Maksymilian;
Tu, Xuemin
Abstract
|
PDF
Title: A Domain Decomposition Discretization of Parabolic Problems
Author(s): Dryja, Maksymilian; Tu, Xuemin
Abstract:
In recent years, domain decomposition methods have attracted much attention due to their successful application to many elliptic and parabolic problems. Domain decomposition methods treat problems based on a domain substructuring, which is attractive for parallel computation, due to the independence among the subdomains. In principle, domain decomposition methods may be applied to the system resulting from a standard discretization of the parabolic problems or, directly, be carried out through a direct discretization of parabolic problems. In this paper, a direct domain decomposition method is introduced to discretize the parabolic problems. The stability and convergence of this algorithm are analyzed, and an $O(\tau+h)$ error bound is provided.
-
Ph.D. Thesis
2005
Translation Validation of Optimizing Compilers
Fang, Yi
Abstract
|
PDF
Title: Translation Validation of Optimizing Compilers
Candidate: Fang, Yi
Advisor(s): Pnueli, Amir; Zuck, Lenore
Abstract:
There is a growing awareness, both in industry and academia, of the crucial role of formally verifying the translation from high-level source code into low-level object code that is typically performed by an optimizing compiler. Formally verifying an optimizing compiler, as one would verify any other large program, is not feasible due to its size, ongoing evolution and modification, and, possibly, proprietary considerations. Translation validation is a novel approach that offers an alternative to the verification of translators in general and compilers in particular: rather than verifying the compiler itself, one constructs a validation tool which, after every run of the compiler, formally confirms that the target code produced in the run is a correct translation of the source program. This thesis work takes an important step towards ensuring an extremely high level of confidence in compilers targeted at EPIC architectures.
In this thesis, we focus on the translation validation of structure-preserving optimizations, i.e., transformations that do not modify programs' structure in a major way. This category of optimizations covers most of the global optimizations performed by compilers. This thesis has two main parts. One develops a proof rule that formally establishes the correctness of structure-preserving transformations based on computational induction. The other part is the development of a tool that applies the proof rule to the automatic validation of global optimizations performed by Intel's ORC compiler for the IA-64 architecture. With minimal instrumentation from the compiler, the tool constructs ``verification conditions'' -- formal theorems that, if valid, establish the correctness of a translation. The verification conditions are then transferred to an automatic theorem prover that checks their validity. The tool thus offers a fully automatic method to formally establish the correctness of each translation.
-
TR2005-875
2005
Nonlinear Image Representation via Local Multiscale Orientation
Hammond, David K.;
Simoncelli, Eero P.
Abstract
|
PDF
Title: Nonlinear Image Representation via Local Multiscale Orientation
Author(s): Hammond, David K.; Simoncelli, Eero P.
Abstract:
We present a nonlinear image representation based on multiscale local orientation measurements. Specifically, an image is first decomposed using a two-orientation steerable pyramid, a tight-frame representation in which the basis functions are directional derivatives of a radially symmetric blurring operator. The pair of subbands at each scale are thus gradients of progressively blurred copies of the original image. We then discard the magnitude information and retain only the orientation of each gradient vector. We develop a method for reconstructing the original image from this orientation information using an algorithm based on projection onto convex sets, and demonstrate its robustness to quantization.
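The magnitude-discarding step can be sketched in a few lines, using plain image gradients in place of the paper's two-orientation steerable pyramid; the test image and function name are invented for illustration.

```python
import numpy as np

# Sketch of an orientation-only representation: compute a gradient
# field, then keep only the angle of each gradient vector, discarding
# its magnitude (the multiscale pyramid machinery is omitted here).
def orientation_field(image):
    gy, gx = np.gradient(image.astype(float))
    return np.arctan2(gy, gx)  # angle in radians; magnitude discarded

# A vertical brightness ramp: every gradient points straight "down"
# the rows, so every orientation should be pi/2.
img = np.outer(np.arange(4), np.ones(4))
theta = orientation_field(img)
```

Reconstructing the image from `theta` alone is the hard part the paper addresses with projection onto convex sets.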
-
TR2005-866
2005
An Analysis of Usage Locality for Data-Centric Web Services
He, Congchun;
Karamcheti, Vijay
Abstract
|
PDF
Title: An Analysis of Usage Locality for Data-Centric Web Services
Author(s): He, Congchun; Karamcheti, Vijay
Abstract:
The growing popularity of XML Web Services is resulting in a significant increase in the proportion of Internet traffic that involves requests to and responses from Web Services. Unfortunately, web service responses, because they are generated dynamically, are considered ``uncacheable'' by traditional caching infrastructures. One way of remedying this situation is by developing alternative caching infrastructures, which improve performance using on-demand service replication, data offloading, and request redirection. These infrastructures benefit from two characteristics of web service traffic --- (1) the open nature of the underlying protocols, SOAP, WSDL, UDDI, which results in service requests and responses adhering to a well-formatted, widely known structure; and (2) the observation that for a large number of currently deployed data-centric services, requests can be interpreted as structured accesses against a physical or virtual database --- but require that there be sufficient locality in service usage to offset replication and redirection costs.
This paper investigates whether such locality does in fact exist in current web service workloads. We examine access logs from two large data-centric web service sites, SkyServer and TerraServer, to characterize workload locality across several dimensions: data space, network regions, and different time epochs. Our results show that both workloads exhibit a high degree of spatial and network locality: 10\% of the client IP addresses in the SkyServer trace contribute to about 99.95\% of the requests, and 99.94\% of the requests in the TerraServer trace are directed towards regions that represent less than 10\% of the overall data space accessible through the service. Our results point to the substantial opportunity for improving Web Services scalability by on-demand service replication.
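The client-side locality measurement reported above can be illustrated with a toy computation: what fraction of requests do the busiest clients account for? The log entries and function name below are invented; the paper uses the actual SkyServer and TerraServer traces.

```python
from collections import Counter

# Fraction of all requests contributed by the busiest `top_fraction`
# of distinct client IPs (a simple concentration measure).
def top_client_share(client_ips, top_fraction):
    counts = Counter(client_ips).most_common()  # sorted, busiest first
    k = max(1, int(len(counts) * top_fraction))
    return sum(c for _, c in counts[:k]) / len(client_ips)

# Hypothetical log: one heavy client and three light ones.
log = ["10.0.0.1"] * 97 + ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
share = top_client_share(log, 0.25)  # share of the busiest 25% of clients
```

A share near 1.0, as in the SkyServer trace, is what makes on-demand replication and redirection pay off.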
-
TR2005-877
2005
Oriented Overlays For Clustering Client Requests To Data-Centric Network Services
He, Congchun;
Karamcheti, Vijay
Abstract
|
PDF
Title: Oriented Overlays For Clustering Client Requests To Data-Centric Network Services
Author(s): He, Congchun; Karamcheti, Vijay
Abstract:
Many of the data-centric network services deployed today hold massive volumes of data at their origin websites, accessing the data to dynamically generate responses. Such dynamic responses are poorly supported by traditional caching infrastructures and result in poor performance and scalability for such services. One way of remedying this situation is to develop alternative caching infrastructures, which can dynamically detect the often large degree of service usage locality and leverage such information to replicate service portions on demand and redirect requests to them at appropriate network locations. Key to building such infrastructures is the ability to cluster and inspect client requests at various points across a wide-area network.
This paper presents a zone-based scheme for constructing oriented overlays, which provide such an ability. Oriented overlays differ from previously proposed unstructured overlays in supporting network traffic flows from many sources towards one (or a small number) of destinations, and vice versa. A good oriented overlay would offer sufficient clustering ability without adversely affecting path latencies. Our overlay construction scheme organizes participating nodes into different zones according to their latencies from the origin server(s), and has each node associate with one or more parents in another zone closer to the origin. Extensive experiments with a PlanetLab-based implementation of our scheme show that it produces overlays that (1) are robust to network dynamics; (2) offer good clustering ability; and (3) have minimal impact on the end-to-end network latencies seen by clients.
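The zone-based construction can be sketched as follows: bucket nodes into latency zones relative to the origin, then have each node pick a parent from the next-closer zone. The zone width, node names, and latencies below are invented for illustration; the paper's scheme also supports multiple parents and adapts to network dynamics.

```python
# Hypothetical zone width: nodes within 50 ms of the origin are zone 0,
# 50-100 ms is zone 1, and so on.
ZONE_WIDTH_MS = 50

def build_overlay(latencies):
    """latencies: {node: round-trip ms to origin} -> {node: parent}."""
    zone = {n: int(ms // ZONE_WIDTH_MS) for n, ms in latencies.items()}
    parents = {}
    for n, z in zone.items():
        if z == 0:
            parents[n] = "origin"  # innermost zone attaches directly
        else:
            # pick the lowest-latency node in the adjacent closer zone
            candidates = [m for m, zm in zone.items() if zm == z - 1]
            parents[n] = min(candidates, key=latencies.get) if candidates else "origin"
    return parents

overlay = build_overlay({"a": 20, "b": 70, "c": 60, "d": 130})
```

Requests then flow from the outer zones towards the origin along parent links, giving each parent a natural vantage point for clustering its children's requests.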
-
Ph.D. Thesis
2005
Translation Validation of Loop Optimizations
Hu, Ying
Abstract
|
PDF
Title: Translation Validation of Loop Optimizations
Candidate: Hu, Ying
Advisor(s): Goldberg, Benjamin; Barrett, Clark
Abstract:
Formal verification is important in designing reliable computer systems. For a critical software system, it is not enough to have a proof of correctness for the source code, there must also be an assurance that the compiler produces a correct translation of the source code into the target machine code. Verifying the correctness of modern optimizing compilers is a challenging task because of their size, their complexity, and their evolution over time.
In this thesis, we focus on the translation validation of loop optimizations. In order to validate the optimizations performed by the compiler, we try to prove the equivalence of the intermediate codes before and after the optimizations. Previous work provided a set of proof rules for establishing the equivalence relation between two programs; however, those rules cannot validate some legal loop optimizations. We propose new proof rules that consider the conditions of loops and the possible elimination of some loops, so that those cases can also be handled. We also design algorithms that apply these new proof rules in an automatic validation process.
Based on the above proof rules, we implement an automatic validation tool for loop optimizations which analyzes the loops, guesses what kinds of loop optimizations occur, proves the validity of a combination of loop optimizations, and synthesizes a series of intermediate codes. We integrate this new loop tool into our translation validation tool TVOC, so that TVOC handles not only optimizations which do not significantly change the structure of the code, but also loop optimizations which do change the structure greatly. With this new part, TVOC has succeeded in validating many examples with loop optimizations.
Speculative optimizations are the aggressive optimizations that are only correct under certain conditions that cannot be known at compile time. In this thesis, we present the theory and algorithms for validating speculative optimizations and generating the runtime tests necessary for speculative optimizations. We also provide several examples and the results of the algorithms for speculative optimizations.
-
TR2005-870
2005
Two-Level Schwarz Algorithms, Using Overlapping Subregions, for Mortar Finite Element Methods
Kim, Hyea Hyun;
Widlund, Olof B.
Abstract
|
PDF
Title: Two-Level Schwarz Algorithms, Using Overlapping Subregions, for Mortar Finite Element Methods
Author(s): Kim, Hyea Hyun; Widlund, Olof B.
Abstract:
Preconditioned conjugate gradient methods based on two-level overlapping Schwarz methods often perform quite well. Such a preconditioner combines a coarse space solver with local components which are defined in terms of subregions which form an overlapping covering of the region on which the elliptic problem is defined. Precise bounds on the rate of convergence of such iterative methods have previously been obtained in the case of conforming lower order and spectral finite elements as well as in a number of other cases. In this paper, this domain decomposition algorithm and analysis are extended to mortar finite elements. It is established that the condition number of the relevant iteration operator is independent of the number of subregions and varies with the relative overlap between neighboring subregions linearly as in the conforming cases previously considered.
-
Ph.D. Thesis
2005
Construction of Component-Based Applications by Planning
Kichkaylo, Tatiana
Abstract
|
PDF
Title: Construction of Component-Based Applications by Planning
Candidate: Kichkaylo, Tatiana
Advisor(s): Karamcheti, Vijay; Ernest Davis
Abstract:
Many modern wide-area distributed systems are component-based. This approach provides great flexibility in adapting applications to the changing state of the environment and user requirements, but increases the complexity of configuring the applications. Because of the scale and heterogeneity of modern wide-area environments, manual configuration is hard, inefficient, suboptimal, and error-prone. Automated application configuration is desired.
Constructing distributed applications requires choosing a set of components that will constitute the application instance and assigning network resources to component executions and data transfers. Stated this way, the application configuration problem (ACP) is similar to the planning (action selection) and scheduling (resource allocation) problems studied by the Artificial Intelligence (AI) community.
This thesis investigates the problem of solving the ACP using AI planning techniques. However, the ACP poses several challenges not usually encountered and addressed by the traditional AI solutions. The problem specification for the ACP can be much larger than the solution, with the relevant portions only identified during the search. Additionally, the interactions between planning operators are numeric rather than logical. Finally, it is desirable to be able to trade off quality of the solution versus search time.
We show that the ACP is undecidable in general. Therefore, instead of a single algorithm, we propose a set of techniques that can be used to compose an algorithm for a particular variety of the ACP that can exploit natural restrictions exhibited by that variety. These techniques address the challenges above by dynamically obtaining portions of the problem specification as necessary during the search, using envelope hierarchies based on numeric information for pruning and search guidance, and discretizing continuous variables to approximate numeric parameters without restricting the form of supported numeric functions.
We illustrate these techniques by describing their use in algorithms tailored for two specific varieties of the ACP --- snapshot configurations for dynamic component-based frameworks, and scheduling of grid workflows with replica selection and explicit resource reservations. Experimental evaluation of the performance of these two algorithms shows that the techniques successfully achieve their goals, with acceptable run-time overhead.
-
TR2005-873
2005
A BDDC algorithm for problems with mortar discretization
Kim, Hyea Hyun;
Dryja, Maksymilian; Widlund, Olof B.
Abstract
|
PDF
Title: A BDDC algorithm for problems with mortar discretization
Author(s): Kim, Hyea Hyun; Dryja, Maksymilian; Widlund, Olof B.
Abstract:
A BDDC (balancing domain decomposition by constraints) algorithm is developed for elliptic problems with mortar discretizations for geometrically non-conforming partitions in both two and three spatial dimensions. The coarse component of the preconditioner is defined in terms of one mortar constraint for each edge/face which is an intersection of the boundaries of a pair of subdomains. A condition number bound of the form $C \max_i \left\{ (1+\log (H_i/h_i))^3 \right\}$ is established. In geometrically conforming cases, the bound can be improved to $C \max_i \left\{ (1+\log (H_i/h_i))^2 \right\}$. This estimate is also valid in the geometrically nonconforming case under an additional assumption on the ratio of mesh sizes and jumps of the coefficients. This BDDC preconditioner is also shown to be closely related to the Neumann-Dirichlet preconditioner for the FETI--DP algorithms of \cite{K-04-3d,KL-02} and it is shown that the eigenvalues of the BDDC and FETI--DP methods are the same except possibly for an eigenvalue equal to 1.
-
TR2005-863
2005
A FETI-DP formulation of three dimensional elasticity problems with mortar discretization
Kim, Hyea Hyun
Abstract
|
PDF
Title: A FETI-DP formulation of three dimensional elasticity problems with mortar discretization
Author(s): Kim, Hyea Hyun
Abstract:
In this paper, a FETI-DP formulation for the three dimensional elasticity problem on non-matching grids over a geometrically conforming subdomain partition is considered. To resolve the nonconformity of the finite elements, a mortar matching condition on the subdomain interfaces (faces) is imposed. By introducing Lagrange multipliers for the mortar matching constraints, the resulting linear system becomes similar to that of a FETI-DP method. In order to make the FETI-DP method efficient for solving this linear system, a relatively large set of primal constraints, which include average and momentum constraints over interfaces (faces) as well as vertex constraints, is introduced. A condition number bound $C(1+\log(H/h))^2$ for the FETI-DP formulation with a Neumann-Dirichlet preconditioner is then proved for elasticity problems with discontinuous material parameters when only some faces are chosen as primal faces on which the average and momentum constraints will be imposed. An algorithm which selects quite a small number of primal faces is also discussed.
-
TR2005-861
2005
BDDC Algorithms for Incompressible Stokes Equations
Li, Jing;
Widlund, Olof B.
Abstract
|
PDF
Title: BDDC Algorithms for Incompressible Stokes Equations
Author(s): Li, Jing; Widlund, Olof B.
Abstract:
The purpose of this paper is to extend the BDDC (balancing domain decomposition by constraints) algorithm to saddle-point problems that arise when mixed finite element methods are used to approximate the system of incompressible Stokes equations. The BDDC algorithms are iterative substructuring methods, which form a class of domain decomposition methods based on the decomposition of the domain of the differential equations into nonoverlapping subdomains. They are defined in terms of a set of primal continuity constraints, which are enforced across the interface between the subdomains and which provide a coarse space component of the preconditioner. Sets of such constraints are identified for which bounds on the rate of convergence can be established that are just as strong as previously known bounds for the elliptic case. In fact, the preconditioned operator is effectively positive definite, which makes the use of a conjugate gradient method possible. A close connection is also established between the BDDC and FETI-DP algorithms for the Stokes case.
-
TR2004-857
2005
FETI--DP, BDDC, and Block Cholesky Methods
Li, Jing;
Widlund, Olof B.
Abstract
|
PDF
Title: FETI--DP, BDDC, and Block Cholesky Methods
Author(s): Li, Jing; Widlund, Olof B.
Abstract:
Two popular non-overlapping domain decomposition methods, the FETI--DP and BDDC algorithms, are reformulated using Block Cholesky factorizations, an approach which can provide a useful framework for the design of domain decomposition algorithms for solving symmetric positive definite linear system of equations. Instead of introducing Lagrange multipliers to enforce the coarse level, primal continuity constraints in these algorithms, a change of variables is used such that each primal constraint corresponds to an explicit degree of freedom. With the new formulations of these algorithms, a simplified proof is provided that the spectra of a pair of FETI--DP and BDDC algorithms, with the same set of primal constraints, are the same. Results of numerical experiments also confirm this result.
-
TR2005-871
2005
On the Use of Inexact Subdomain Solvers for BDDC Algorithms
Li, Jing;
Widlund, Olof B.
Abstract
|
PDF
Title: On the Use of Inexact Subdomain Solvers for BDDC Algorithms
Author(s): Li, Jing; Widlund, Olof B.
Abstract:
The standard BDDC (balancing domain decomposition by constraints) preconditioner is shown to be equivalent to a preconditioner built from a partially subassembled finite element model. This results in a system of linear algebraic equations which is much easier to solve in parallel than the fully assembled model; the cost is then often dominated by that of the problems on the subdomains. An important role is also played, both in theory and practice, by an average operator and in addition exact Dirichlet solvers are used on the subdomains in order to eliminate the residual in the interior of the subdomains. The use of inexact solvers for these problems and even the replacement of the Dirichlet solvers by a trivial extension are considered. It is established that one of the resulting algorithms has the same eigenvalues as the standard BDDC algorithm, and the connection of another with the FETI-DP algorithm with a lumped preconditioner is also considered. Multigrid methods are used in the experimental work and under certain assumptions, it can be established that the iteration count essentially remains the same as when exact solvers are used, while considerable gains in the speed of the algorithm can be realized since the cost of the exact solvers grows superlinearly with the size of the subdomain problems while the multigrid methods are linear.
-
TR2005-872
2005
Real-time rendering of normal maps with discontinuities
Parilov, Evgueni;
Rosenberg, Ilya; Zorin, Denis
Abstract
|
PDF
Title: Real-time rendering of normal maps with discontinuities
Author(s): Parilov, Evgueni; Rosenberg, Ilya; Zorin, Denis
Abstract:
Normal mapping uses normal perturbations stored in a texture to give objects a more geometrically complex appearance without increasing the number of geometric primitives. Standard bi- and trilinear interpolation of normal maps works well if the normal field is continuous, but may result in visible artifacts in the areas where the field is discontinuous, which is common for surfaces with creases and dents.
In this paper we describe a real-time rendering technique which preserves the discontinuity curves of the normal field at sub-pixel level, and its GPU implementation. Our representation of the piecewise-continuous normal field is based on approximations of the distance function to the discontinuity set and its gradient. Using these approximations we can efficiently reconstruct discontinuities at arbitrary resolution and ensure that no normals are interpolated across the discontinuity. We also describe a method for updating the normal field along the discontinuities in real time, based on blending the original field with one calculated from a user-defined surface profile.
-
TR2005-859
2005
Algorithmic Algebraic Model Checking I: The Case of Biochemical Systems and their Reachability Analysis
Piazza, C.;
Antoniotti, M.; Mysore, V.; Policriti, A.; Winkler, F.; Mishra, B.
Abstract
|
PDF
Title: Algorithmic Algebraic Model Checking I: The Case of Biochemical Systems and their Reachability Analysis
Author(s): Piazza, C.; Antoniotti, M.; Mysore, V.; Policriti, A.; Winkler, F.; Mishra, B.
Abstract:
Presently, there is no clear way to determine if the current body of biological facts is sufficient to explain phenomenology. Rigorous mathematical models with automated tools for reasoning, simulation, and computation can be of enormous help in uncovering cognitive flaws, qualitative simplifications, or overly generalized assumptions. The approaches developed by control theorists analyzing stability of a system with feedback, physicists studying asymptotic properties of dynamical systems, and computer scientists reasoning about discrete or hybrid (combining discrete events with continuous events) reactive systems have all tried to address some aspects of the same problem in a very concrete manner. We explore here how biological processes could be studied in a similar manner, and how the appropriate tools for this purpose can be created.
In this paper, we suggest a possible confluence of the theory of hybrid automata and the techniques of algorithmic algebra to create a computational basis for systems biology. We start by discussing the basis for this choice -- semi-algebraic hybrid systems -- while recognizing both its power and its limitations. We explore solutions to the bounded-reachability problem through symbolic computation methods, applied to the descriptions of the traces of the hybrid automaton. Because the description of the automaton is through semi-algebraic sets, the evolution of the automaton can be described even in cases where system parameters and initial conditions are unspecified. Nonetheless, semi-algebraic decision procedures provide a succinct description of algebraic constraints over the initial values and parameters for which proper behavior of the system can be expected. In addition, by keeping track of conservation principles in terms of constraint or invariant manifolds on which the system must evolve, we avoid many of the obvious pitfalls of numerical approaches.
-
Ph.D. Thesis
2005
Extensible MultiModal Environment Toolkit (EMMET): A Toolkit for Prototyping and Remotely Testing Speech and Gesture Based Multimodal Interfaces
Robbins, Christopher A.
Abstract
|
PDF
Title: Extensible MultiModal Environment Toolkit (EMMET): A Toolkit for Prototyping and Remotely Testing Speech and Gesture Based Multimodal Interfaces
Candidate: Robbins, Christopher A.
Advisor(s): Perlin, Ken
Abstract:
Ongoing improvements to the performance and accessibility of less conventional input modalities such as speech and gesture recognition now provide new dimensions for interface designers to explore. Yet there is a scarcity of commercial applications which utilize these modalities either independently or multimodally. This scarcity partially results from a lack of development tools and design guidelines to facilitate the use of speech and gesture.
An integral aspect of the user interface design process is the ability to easily evaluate various design solutions through an iterative process of prototyping and testing. Through this process guidelines emerge that aid in the design of future interfaces. Today there is no shortage of tools supporting the development of conventional interfaces. However there do not exist resources allowing interface designers to easily prototype and quickly test, via remote distribution, interface designs utilizing speech and gesture.
The thesis work for this dissertation explores the development of an Extensible MultiModal Environment Toolkit (EMMET) for prototyping and remotely testing speech and gesture based multimodal interfaces to three-dimensional environments. The overarching goals for this toolkit are to allow its users to: explore speech and gesture based interface design without requiring an understanding of the details involved in the low-level implementation of speech or gesture recognition, quickly distribute their multimodal interface prototypes via the Web, and receive multimodal usage statistics collected remotely after each use of their application.
EMMET ultimately contributes to the field of multimodal user interface design by providing an environment to existing user interface developers in which speech and gesture recognition have been seamlessly integrated into their palette of user input options. Such seamless integration serves to increase the utilization within applications of speech and gesture modalities by removing any actual or perceived deterrents to the use of these modalities versus the use of conventional modalities. EMMET additionally strives to improve the quality of speech and gesture based interfaces by supporting the prototype-and-test development cycle through its Web distribution and usage statistics collection capabilities. These capabilities also allow developers to realize new design guidelines specific to the use of speech and gesture.
-
TR2005-874
2005
Ranking with a P-norm Push
Rudin, Cynthia
Abstract
|
PDF
Title: Ranking with a P-norm Push
Author(s): Rudin, Cynthia
Abstract:
We are interested in supervised ranking with the following twist: our goal is to design algorithms that perform especially well near the top of the ranked list, and are only required to perform sufficiently well on the rest of the list. Towards this goal, we provide a general form of convex objective that gives high-scoring examples more importance. This ``push'' near the top of the list can be chosen arbitrarily large or small. We choose $\ell_p$-norms to provide a specific type of push; as $p$ becomes large, the algorithm concentrates harder near the top of the list.
We derive a generalization bound based on the $p$-norm objective. We then derive a corresponding boosting-style algorithm, and illustrate the usefulness of the algorithm through experiments on UCI data.
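The idea of the objective can be sketched numerically: each negative example accumulates a pairwise loss against the positives, and raising that per-negative penalty to the power $p$ concentrates the objective on the worst-placed negatives as $p$ grows. The exponential pairwise loss below is one common instantiation chosen for illustration, not necessarily the paper's exact formulation.

```python
from math import exp

# Illustrative p-norm push objective: per-negative pairwise loss against
# the positives, raised to the power p and summed over negatives.
def pnorm_push(pos_scores, neg_scores, p):
    return sum(
        sum(exp(-(s_pos - s_neg)) for s_pos in pos_scores) ** p
        for s_neg in neg_scores
    )

# Hypothetical ranking scores: one negative (1.8) sits near the top,
# interleaved with the positives; the other (0.0) ranks low.
pos = [2.0, 1.5]
neg = [0.0, 1.8]
small_p = pnorm_push(pos, neg, 1)
large_p = pnorm_push(pos, neg, 4)
```

With $p=4$ the high-scoring negative dominates the objective, so minimizing it "pushes" that negative down the list, which is exactly the emphasis on the top of the ranking described above.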
-
TR2005-876
2005
Better Burst Detection
Shasha, Dennis;
Zhang, Xin
Abstract
|
PDF
Title: Better Burst Detection
Author(s): Shasha, Dennis; Zhang, Xin
Abstract:
A burst is a large number of events occurring within a certain time window. As an unusual activity, it is a noteworthy phenomenon in many natural and social processes. Many data stream applications require the detection of bursts across a variety of window sizes. For example, stock traders may be interested in bursts having to do with institutional purchases or sales that are spread out over minutes or hours. Detecting a burst over any of $k$ window sizes, a problem we call {\em elastic burst detection}, in a stream of length $N$ naively requires $O(kN)$ time. Previous work \cite{DiscoveryBook03} showed that a simple Shifted Binary Tree structure can reduce this time substantially (in very favorable cases near to $O(N)$) by filtering away obvious non-bursts. Unfortunately, for certain data distributions, the filter marks many windows of events as possible bursts, even though a detailed check shows them to be non-bursts.
In this paper, we present a new algorithmic framework for elastic burst detection: a family of data structures that generalizes the Shifted Binary Tree. We then present a heuristic search algorithm to find an efficient structure among the many offered by the framework, given the input. We study how different inputs affect the desired structures. Experiments on both synthetic and real world data show a factor of up to 35 times improvement compared with the Shifted Binary Tree over a wide variety of inputs, depending on the data distribution. We show an example application that identifies interesting correlations between bursts of activity in different stocks.
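A rough software sketch of the filtering idea behind the Shifted Binary Tree baseline may help (the names and thresholding details are assumptions; the paper's contribution is a family of structures generalizing this one): overlapping dyadic nodes give a cheap upper bound, and only regions whose bound clears the threshold get the exact check.

```python
import numpy as np

def shifted_binary_tree_filter(x, w, threshold):
    """Sketch of Shifted-Binary-Tree-style filtering for non-negative event
    counts: any window of length <= 2**(i-1) + 1 lies inside some node at
    level i (nodes of size 2**i overlapping by half), so a node whose sum
    is below the threshold rules out every window it covers. Only the
    surviving regions get the exact detailed check."""
    n = len(x)
    i = 1
    while 2 ** (i - 1) + 1 < w:           # smallest level whose nodes cover a w-window
        i += 1
    size, shift = 2 ** i, 2 ** (i - 1)
    candidates = []
    for start in range(0, n - 1, shift):
        if x[start:start + size].sum() >= threshold:   # cannot rule this node out
            candidates.append((start, min(n, start + size)))
    bursts = []
    for lo, hi in candidates:             # detailed O(w) check, candidates only
        for s in range(lo, hi - w + 1):
            if x[s:s + w].sum() >= threshold:
                bursts.append(s)
    return sorted(set(bursts))
```

Because event counts are non-negative, pruning is safe: a window inside a below-threshold node can never itself exceed the threshold.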
-
TR2005-868
2005
Modeling Of Concurrent Web Sessions With Bounded Inconsistency In Shared Data
Totok, Alexander;
Karamcheti, Vijay
Abstract
|
PDF
Title: Modeling Of Concurrent Web Sessions With Bounded Inconsistency In Shared Data
Author(s): Totok, Alexander; Karamcheti, Vijay
Abstract:
Client interactions with modern web-accessible network services are typically organized into sessions involving multiple requests that read and write shared application data. Therefore, when executed concurrently, web sessions may invalidate each other's data. Depending on the nature of the business represented by the service, allowing a session with invalid data to progress might lead to financial penalties for the service provider, while blocking the session's progress and deferring its execution (e.g., by relaying its handling to customer service) will most probably result in user dissatisfaction. A compromise would be to tolerate some bounded data inconsistency, which would allow most of the sessions to progress, while limiting the potential financial loss incurred by the service. In order to reason quantitatively about these tradeoffs, the service provider can benefit from models that predict metrics, such as the percentage of successfully completed sessions, for a certain degree of tolerable data inconsistency.
This paper develops such analytical models of concurrent web sessions with bounded inconsistency in shared data for three popular concurrency control algorithms. We illustrate our models using the sample buyer scenario from the TPC-W e-Commerce benchmark, and validate them by showing their close correspondence to measured results of concurrent session execution in both a simulated and a real web server environment. Our models take as input parameters of service usage, which can be obtained through profiling of incoming client requests. We augment our web application server environment with a profiling and automated decision making infrastructure which is shown to successfully choose, based on the specified performance metric, the best concurrency control algorithm in real time in response to changing service usage patterns.
-
Ph.D. Thesis
2005
Pattern Discovery for Hypotheses Generation in Biology
Tsirigos, Aristotelis
Abstract
|
PDF
Title: Pattern Discovery for Hypotheses Generation in Biology
Candidate: Tsirigos, Aristotelis
Advisor(s): Shasha, Dennis
Abstract:
In recent years, the increase in the amounts of available genomic as well as gene expression data has provided researchers with the necessary information to train and test various models of gene origin, evolution, function and regulation. In this thesis, we present novel solutions to key problems in computational biology that deal with nucleotide sequences (horizontal gene transfer detection), amino-acid sequences (protein sub-cellular localization prediction), and gene expression data (transcription factor - binding site pair discovery). Different pattern discovery techniques are utilized, such as maximal sequence motif discovery and maximal itemset discovery, and combined with support vector machines in order to achieve significant improvements against previously proposed methods.
-
TR2005-865
2005
A BDDC algorithm for flow in porous media with a hybrid finite element discretization
Tu, Xuemin
Abstract
|
PDF
Title: A BDDC algorithm for flow in porous media with a hybrid finite element discretization
Author(s): Tu, Xuemin
Abstract:
The BDDC (balancing domain decomposition by constraints) methods have been applied successfully to solve the large sparse linear algebraic systems arising from conforming finite element discretizations of elliptic boundary value problems. In this paper, the scalar elliptic problems for flow in porous media are discretized by a hybrid finite element method which is equivalent to a nonconforming finite element method. The BDDC algorithm is extended to these problems which originate as saddle point problems.
Edge/face average constraints are enforced across the interface and the same rate of convergence is obtained as in conforming cases. The condition number of the preconditioned system is estimated and numerical experiments are discussed.
-
TR2005-864
2005
A BDDC Algorithm for Mixed Formulation of Flow in Porous Media
Tu, Xuemin
Abstract
|
PDF
Title: A BDDC Algorithm for Mixed Formulation of Flow in Porous Media
Author(s): Tu, Xuemin
Abstract:
The BDDC (balancing domain decomposition by constraints) algorithms are similar to the balancing Neumann-Neumann methods, with a small number of continuity constraints enforced across the interface throughout the iterations. These constraints form a coarse, global component of the preconditioner. The BDDC methods are powerful for solving large sparse linear algebraic systems arising from discretizations of elliptic boundary value problems. In this paper, the BDDC algorithm is extended to saddle point problems generated from the mixed finite element methods used to approximate the scalar elliptic problems for flow in porous media.
Edge/face average constraints are enforced and the same rate of convergence is obtained as for simple elliptic cases. The condition number bound is estimated and numerical experiments are discussed. In addition, a comparison of the BDDC method with an edge/face-based iterative substructuring method is provided.
-
TR2005-879
2005
BDDC Domain Decomposition Algorithms: Methods with Three Levels and for Flow in Porous Media
Tu, Xuemin
Abstract
|
PDF
Title: BDDC Domain Decomposition Algorithms: Methods with Three Levels and for Flow in Porous Media
Author(s): Tu, Xuemin
Abstract:
Two inexact coarse solvers for Balancing Domain Decomposition by Constraints (BDDC) algorithms are introduced and analyzed. These solvers help remove a bottleneck for the two-level BDDC algorithms related to the cost of the coarse problem when the number of subdomains is large. At the same time, a good convergence rate is maintained.
BDDC algorithms are also developed for the linear systems arising from flow in porous media discretized with mixed and hybrid finite elements. Our methods are proven to be scalable and the condition numbers of the operators with our BDDC preconditioners grow only polylogarithmically with the size of the subdomain problems.
-
TR2005-862
2005
Three-Level BDDC in Three Dimensions
Tu, Xuemin
Abstract
|
PDF
Title: Three-Level BDDC in Three Dimensions
Author(s): Tu, Xuemin
Abstract:
BDDC methods are nonoverlapping iterative substructuring domain decomposition methods for the solution of large sparse linear algebraic systems arising from discretization of elliptic boundary value problems. Their coarse problem is given by a small number of continuity constraints which are enforced across the interface. The coarse problem matrix is generated and factored by direct solvers at the beginning of the computation, and it can ultimately become a bottleneck if the number of subdomains is very large.
In this paper, two three-level BDDC methods are introduced for solving the coarse problem approximately in three dimensions. This is an extension of previous work for the two dimensional case and since vertex constraints alone do not suffice to obtain polylogarithmic condition number bound, edge constraints are considered in this paper. Some new technical tools are then needed in the analysis and this makes the three dimensional case more complicated than the two dimensional case.
Estimates of the condition numbers are provided for two three-level BDDC methods and numerical experiments are also discussed.
-
Ph.D. Thesis
2005
Automatic Verification of Parameterized Systems
Xu, Jiazhao
Abstract
|
PDF
Title: Automatic Verification of Parameterized Systems
Candidate: Xu, Jiazhao
Advisor(s): Pnueli, Amir
Abstract:
Verification plays an indispensable role in designing reliable computer hardware and software systems. With the fast growth in design complexity and the quick turnaround in design time, formal verification has become an increasingly important technology for establishing correctness as well as for finding difficult bugs. Since there is no ``silver bullet'' to solve all verification problems, a spectrum of powerful formal verification techniques has been developed to tackle different verification problems and complexity issues. Depending on the nature of the problem, whose most salient components are the system implementation and the property specification, a proper methodology or a combination of different techniques is applied to solve the problem.
In this thesis, we focus on the research and development of formal methods to uniformly verify parameterized systems. A parameterized system is a class of systems obtained by instantiating the system parameters. Parameterized verification seeks a single correctness proof of a property for the entire class. Although the general parameterized verification problem is undecidable [AK86], it is possible to solve special classes by applying a repertoire of techniques and heuristics. Many methods in parameterized verification require a great deal of human interaction. This makes the application of these methods to real world problems infeasible. Thus, the main focus of this research is to develop techniques that can be automated to deliver proofs of safety and liveness properties.
Our research combines various formal techniques such as deductive methods, abstraction and model checking. One main result in this thesis is an automatic deductive method for parameterized verification. We apply small model properties of Bounded Data Systems (a special type of parameterized system) to help prove deductive inference rules for the safety properties of BDS systems. Another methodology we developed enables us to prove liveness properties of parameterized systems via an automatic abstraction method called counter abstraction. There are several useful by-products from our research: a set of heuristics is established for the automatic generation of program invariants, which can benefit deductive verification in general; we also propose methodologies for the automatic abstraction of fairness conditions that are crucial for proving liveness properties.
-
Ph.D. Thesis
2005
Mobility, Route Caching, and TCP Performance in Mobile Ad Hoc Networks
Yu, Xin
Abstract
|
PDF
Title: Mobility, Route Caching, and TCP Performance in Mobile Ad Hoc Networks
Candidate: Yu, Xin
Advisor(s): Johnson, David B.
Abstract:
In a mobile ad hoc network, mobile nodes communicate with each other through wireless links. Mobility causes frequent topology changes. This thesis addresses the fundamental challenges mobility presents to on-demand routing protocols and to TCP.
On-demand routing protocols use route caches to make routing decisions. Due to mobility, cached routes easily become stale. To address the cache staleness issue, prior work used adaptive timeout mechanisms. However, heuristics cannot accurately estimate timeouts because topology changes are unpredictable. I propose to proactively disseminate the broken link information to the nodes that have cached the link. I define a new cache structure called a cache table to maintain the information necessary for cache updates, and design a distributed cache update algorithm. This is the first work to proactively update route caches in an adaptive manner. Simulation results show that proactive cache updating is more efficient than adaptive timeout mechanisms. I conclude that proactive cache updating is key to the adaptation of on-demand routing protocols to mobility.
TCP does not perform well in mobile ad hoc networks. Prior work provided link failure feedback to TCP so that it can avoid invoking congestion control mechanisms for packet losses caused by route failures. Simulation results show that my cache update algorithm significantly improves TCP throughput since it reduces the effect of mobility on TCP. TCP still suffers from frequent data and ACK losses. I propose to make routing protocols aware of lost TCP packets and help reduce TCP timeouts. I design two mechanisms that exploit cross-layer information awareness: early packet loss notification (EPLN) and best-effort ACK delivery (BEAD). EPLN notifies TCP senders about lost data. BEAD retransmits ACKs at intermediate nodes or at TCP receivers. Simulation results show that the two mechanisms significantly improve TCP throughput. I conclude that cross-layer information awareness is key to making TCP efficient in the presence of mobility.
I also study the impact of route caching strategies on the scalability of on-demand routing protocols with mobility. I show that making route caches adapt quickly and efficiently to topology changes is key to the scalability of on-demand routing protocols with mobility.
-
Ph.D. Thesis
2005
Information Extraction from Multiple Syntactic Sources
Zhao, Shubin
Abstract
|
PDF
Title: Information Extraction from Multiple Syntactic Sources
Candidate: Zhao, Shubin
Advisor(s): Grishman, Ralph
Abstract:
Information Extraction is the automatic extraction of facts from text, which includes the detection of named entities, entity relations and events. Conventional approaches to Information Extraction try to find syntactic patterns based on deep processing of text, such as partial or full parsing. The problem these solutions face is that as deeper analysis is used, the accuracy of the result decreases, and one cannot recover from the induced errors. On the other hand, lower level processing is more accurate and can also provide useful information. However, within the framework of conventional approaches, this kind of information cannot be efficiently incorporated.
This thesis describes a novel supervised approach based on kernel methods to address these issues. In this approach customized kernels are used to match syntactic structures produced from different preprocessing phases. Using properties of a kernel, individual kernels are combined into composite kernels to integrate and extend all the information. The composite kernels can be used with various classifiers, such as Nearest Neighbor or Support Vector Machines (SVM). The main classifier we propose to use is SVM due to its ability to generalize in large dimensional feature spaces. We will show that each level of syntactic information can contribute to IE tasks, and low level information can help to recover from errors in deep processing.
The new approach has demonstrated state-of-the-art performance on two benchmark tasks. The first task is detecting slot fillers for management succession events (MUC-6). For this task two types of kernels were designed, a surface kernel based on word n-grams and a kernel built on sentence dependency trees; the second task is the ACE RDR evaluation, which is to recognize relations between entities in text from newswire and broadcast news transcript. For this task, five kernels were built to represent information from sentence tokenization, syntactic parsing and dependency parsing. Experimental results for the two tasks will be shown and discussed.
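The composition step rests on a standard closure property: sums (and products) of positive semidefinite kernels are again valid kernels. A toy sketch under that assumption follows (the word n-gram kernel and the weights are illustrative, not the five kernels built in the thesis):

```python
def ngram_kernel(a, b, n=2):
    """Similarity of two token sequences: count of shared word n-grams.
    A surface-level kernel, analogous in spirit to the thesis's n-gram kernel."""
    grams = lambda s: {tuple(s[i:i + n]) for i in range(len(s) - n + 1)}
    return len(grams(a) & grams(b))

def composite_kernel(a, b, kernels, weights):
    """Weighted sum of component kernels; this is a valid kernel whenever
    each component is positive semidefinite and every weight is non-negative,
    so it can be plugged directly into an SVM."""
    return sum(w * k(a, b) for k, w in zip(kernels, weights))
```

Each component kernel contributes evidence from one level of analysis; the sum lets shallow, reliable features compensate when deeper parses are wrong.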
-
TR2004-852
2004
Fast and Cheap Genome wide Haplotype Construction via Optical Mapping
Anantharaman, Thomas;
Mysore, Venkatesh; Mishra, Bud
Abstract
|
PDF
Title: Fast and Cheap Genome wide Haplotype Construction via Optical Mapping
Author(s): Anantharaman, Thomas; Mysore, Venkatesh; Mishra, Bud
Abstract:
We describe an efficient algorithm to construct genome wide haplotype restriction maps of an individual by aligning single molecule DNA fragments collected with Optical Mapping technology. Using this algorithm and a small amount of genomic material, we can construct the parental haplotypes for each diploid chromosome of any individual, one from the father and the other from the mother. Since such haplotype maps reveal the polymorphisms due to single nucleotide differences (SNPs) and small insertions and deletions (RFLPs), they are useful in association studies, studies involving genomic instabilities in cancer, and genetics. For instance, such haplotype restriction maps of individuals in a population can be used in association studies to locate genes responsible for genetic diseases with relatively low cost and high throughput. If the underlying problem is formulated as a combinatorial optimization problem, it can be shown to be NP-complete (a special case of the K-population problem). But by effectively exploiting the structure of the underlying error processes and using a novel analog of the Baum-Welch algorithm for HMM models, we devise a probabilistic algorithm with a time complexity that is linear in the number of markers. The algorithms were tested by constructing the first genome wide haplotype restriction map of the microbe T. pseudonana, as well as a haplotype restriction map of a 120 Megabase region of human chromosome 4. The frequency of false positives and false negatives was estimated using simulated data. The empirical results were found to be very promising.
-
TR2004-853
2004
Naturally Speaking: A Systems Biology Tool with Natural Language Interfaces
Antoniotti, Marco;
Lau, Ian T.; Mishra, Bud
Abstract
|
PDF
Title: Naturally Speaking: A Systems Biology Tool with Natural Language Interfaces
Author(s): Antoniotti, Marco; Lau, Ian T.; Mishra, Bud
Abstract:
This short paper describes a systems biology software tool that can engage in a dialogue with a biologist by responding to questions posed to it in English (or another natural language) regarding the behavior of a complex biological system, and by suggesting a set of facts about the biological system based on a time-tested generate-and-test approach. Thus, this bioinformatics system improves the quality of the interaction that a biologist can have with a system built on rigorous mathematical modeling, but without being aware of the underlying mathematically sophisticated concepts or notations. Given the nature of the mathematical semantics of our Simpathica/XSSYS tool, it was possible to construct a well-founded natural language interface on top of the computational kernel. We discuss our tool and illustrate its use with a few examples. The natural language subsystem is available as an integrated subsystem of the Simpathica/XSSYS tool and through a simple Web-based interface; we describe both systems in the paper. More details about the system can be found at http://bioinformatics.nyu.edu and its sub-pages.
-
TR2004-854
2004
Practical Packrat Parsing
Grimm, Robert
Abstract
|
PDF
Title: Practical Packrat Parsing
Author(s): Grimm, Robert
Abstract:
A considerable number of research projects are exploring how to extend object-oriented programming languages such as Java with, for example, support for generics, multiple dispatch, or pattern matching. To keep up with these changes, language implementors need appropriate tools. In this context, easily extensible parser generators are especially important because parsing program sources is a necessary first step for any language processor, be it a compiler, syntax-highlighting editor, or API documentation generator. Unfortunately, context-free grammars and the corresponding LR or LL parsers, while well understood and widely used, are also unnecessarily hard to extend. To address this lack of appropriate tools, we introduce Rats!, a parser generator for Java that supports easily modifiable grammars and avoids the complexities associated with altering LR or LL grammars. Our work builds on recent research on packrat parsers, which are recursive descent parsers that perform backtracking but also memoize all intermediate results (hence their name), thus ensuring linear-time performance. Our work makes this parsing technique, which has been developed in the context of functional programming languages, practical for object-oriented languages. Furthermore, our parser generator supports simpler grammar specifications and more convenient error reporting, while also producing better performing parsers through aggressive optimizations. In this paper, we motivate the need for more easily extensible parsers, describe our parser generator and its optimizations in detail, and present the results of our experimental evaluation.
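The memoization idea is small enough to sketch. Below is a minimal packrat parser for a toy arithmetic grammar (an illustration of the technique, not Rats! output; the grammar and names are assumptions): each (rule, position) pair is computed at most once, so even with backtracking the parse runs in linear time.

```python
from functools import lru_cache

def parse(text):
    """Packrat parser for:  Expr <- Term (('+' / '-') Term)*
                            Term <- [0-9]+ / '(' Expr ')'
    Each rule returns (value, next_position) or None; lru_cache on the
    position argument is the packrat memo table."""
    @lru_cache(maxsize=None)
    def expr(pos):
        r = term(pos)
        if r is None:
            return None
        val, pos = r
        while pos < len(text) and text[pos] in '+-':
            op, nxt = text[pos], term(pos + 1)
            if nxt is None:
                return None
            rhs, pos = nxt
            val = val + rhs if op == '+' else val - rhs
        return val, pos

    @lru_cache(maxsize=None)
    def term(pos):
        if pos < len(text) and text[pos] == '(':
            r = expr(pos + 1)
            if r and r[1] < len(text) and text[r[1]] == ')':
                return r[0], r[1] + 1
            return None
        end = pos
        while end < len(text) and text[end].isdigit():   # [0-9]+
            end += 1
        return (int(text[pos:end]), end) if end > pos else None

    r = expr(0)
    return r[0] if r and r[1] == len(text) else None
```

Extending such a grammar is a matter of adding or overriding rules, without the shift/reduce conflicts that make LR grammars brittle.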
-
Ph.D. Thesis
2004
Partitionable Services Framework: Seamless Access to Distributed Applications
Ivan, Anca
Abstract
|
PDF
Title: Partitionable Services Framework: Seamless Access to Distributed Applications
Candidate: Ivan, Anca
Advisor(s): Karamcheti, Vijay
Abstract:
A key problem in contemporary distributed systems is how to satisfy user quality of service (QoS) requirements for distributed applications deployed in heterogeneous, dynamically changing environments spanning multiple administrative domains.
An attractive solution is to create an infrastructure which satisfies user QoS requirements by automatically and transparently adapting distributed applications to any environment changes with minimum user input. However, successful use of this approach requires overcoming three challenges: (1) Capturing the application behavior and its relationship with the environment as a set of compact local specifications, using both general, quantitative (e.g., CPU usage) and qualitative (e.g., security) properties. Such information should be sufficient to reason about the global behavior of the application deployment. (2) Finding the ``best'' application deployment that satisfies both application and user requirements, and the various domain policies. The search algorithm should be complete, efficient, scalable with regard to application and network sizes, and guarantee optimality (e.g., resources consumed by applications). (3) Ensuring that the found deployments are practical and efficient, i.e., that the efficiency of automatic deployments is comparable with the efficiency of hand-tuned solutions.
This dissertation describes three techniques that address these challenges in the context of component-based applications. The modularity and reusability of the latter enable automatic deployments while supporting reasoning about the global connectivity based on the local information exposed by each component. The first technique extends the basic component-based application model with information about conditions and effects of component deployments and linkages, together with interactions between components and the network. The second technique uses AI planning to build an efficient and scalable algorithm which exploits the expressivity of the application model to find an application deployment that satisfies user QoS and application requirements. The last technique ensures that application deployments are both practical and efficient, by leveraging language and run-time system support to automatically customize components, as appropriate for the desired security and data consistency guarantees. These techniques are implemented as integral parts of the Partitionable Services Framework (PSF), a Java-based framework which flexibly assembles component-based applications to suit the properties of their environment. PSF facilitates on-demand, transparent migration and replication of application components at locations closer to clients, while retaining the illusion of a monolithic application.
The benefits of PSF are evaluated by deploying representative component-based applications in an environment simulating fast and secure domains connected by slow and insecure links. Analysis of the programming and the deployment processes shows that: (1) the code modifications required by PSF are minimal, (2) PSF appropriately adapts the deployments based on the network state and user QoS requirements, (3) the run-time deployment overheads incurred by PSF are negligible compared to the application lifetime, and (4) the efficiency of PSF-deployed applications matches that of hand-crafted solutions.
-
TR2004-851
2004
Sekitei: An AI planner for Constrained Component Deployment in Wide-Area Networks
Kichkaylo, Tatiana;
Ivan, Anca; Karamcheti, Vijay
Abstract
|
PDF
Title: Sekitei: An AI planner for Constrained Component Deployment in Wide-Area Networks
Author(s): Kichkaylo, Tatiana; Ivan, Anca; Karamcheti, Vijay
Abstract:
Wide-area network applications are increasingly being built using component-based models, enabling integration of diverse functionality in modules distributed across the network. In such models, dynamic component selection and deployment enables an application to flexibly adapt to changing client and network characteristics, achieve load balancing, and satisfy QoS requirements. Unfortunately, the problem of finding a valid component deployment is hard because one needs to decide on the set of components while satisfying various constraints resulting from application semantic requirements, network resource limitations, and interactions between the two. In this paper, we describe a general model for the component placement problem and present an algorithm for it, which is based on AI planning algorithms. We validate the effectiveness of our algorithm by demonstrating its scalability with respect to network size and number of components in the context of deployments generated for two example applications, a security-sensitive mail service and a webcast service, in a variety of network environments.
-
TR2004-855
2004
Dual-Primal FETI Methods for Linear Elasticity
Klawonn, Axel;
Widlund, Olof B.
Abstract
|
PDF
Title: Dual-Primal FETI Methods for Linear Elasticity
Author(s): Klawonn, Axel; Widlund, Olof B.
Abstract:
Dual-Primal FETI methods are nonoverlapping domain decomposition methods where some of the continuity constraints across subdomain boundaries are required to hold throughout the iterations, as in primal iterative substructuring methods, while most of the constraints are enforced by Lagrange multipliers, as in one-level FETI methods. The purpose of this article is to develop strategies for selecting these constraints, which are enforced throughout the iterations, such that good convergence bounds are obtained, which are independent of even large changes in the stiffnesses of the subdomains across the interface between them. A theoretical analysis is provided and condition number bounds are established which are uniform with respect to arbitrarily large jumps in the Young's modulus of the material and otherwise only depend polylogarithmically on the number of unknowns of a single subdomain.
-
Ph.D. Thesis
2004
VALIS: A Multi-language System for Rapid Prototyping in Computational Biology
Paxia, Salvatore
Abstract
|
PDF
Title: VALIS: A Multi-language System for Rapid Prototyping in Computational Biology
Candidate: Paxia, Salvatore
Advisor(s): Mishra, Bud
Abstract:
Bioinformatics is a challenging area for computer science, since the underlying computational formalisms span database systems, numerical methods, geometric modeling and visualization, imaging and image analysis, combinatorial algorithms, data analysis and mining, statistical approaches, and reasoning under uncertainty.
This thesis describes the Valis environment for rapid application prototyping in bioinformatics. The core components of the Valis system are the underlying database structure and the algorithmic development platform.
This thesis presents a novel set of data structures that has marked advantages when dealing with unstructured and unbounded data that are common in scientific fields and bioinformatics.
Bioinformatics problems rarely have a one-language, one-platform solution. The Valis environment allows seamless integration between scripts written in different programming languages and includes tools to rapidly prototype graphical user interfaces.
To date, the speed of computation of most whole genome analysis tools has stood in the way of developing fast interactive programs that may be used as exploratory tools. This thesis presents the basic algorithms and widgets that permit rapid prototyping of whole genomic scale real-time applications within Valis.
-
Ph.D. Thesis
2004
Thick Surfaces: Interactive Modeling of Topologically Complex Geometric Details
Peng, Jianbo
Abstract
|
PDF
Title: Thick Surfaces: Interactive Modeling of Topologically Complex Geometric Details
Candidate: Peng, Jianbo
Advisor(s): Zorin, Denis
Abstract:
Many objects in computer graphics applications are represented by surfaces. This representation works very well for objects of simple topology, but can become prohibitively expensive for objects with complex small-scale geometric details.
Volumetric textures aligned with a surface can be used to add topologically complex geometric details to an object, while retaining an underlying simple surface structure. The simple surface structure provides great controllability on the overall shape of the model, and volumetric textures handle geometric details and topological changes efficiently.
Adding a volumetric texture to a surface requires more than a conventional two-dimensional parameterization: a part of the space surrounding the surface has to be parameterized. Another problem with using volumetric textures for adding geometric detail is the difficulty of rendering implicitly represented surfaces, especially when they are changed interactively.
We introduce thick surfaces to represent objects with topologically complex geometric details. A thick surface consists of three components. First, a base mesh of simple structure is used to approximate the overall shape of the object. Second, a layer of space along the base mesh is parameterized. We define the layer of space as a shell, which covers the geometric details of the object. Third, volumetric textures of geometric details are mapped into the shell. The object is represented as the implicit surface encoded by the volumetric textures. Places without volumetric textures are filled with patches of the base mesh.
We present algorithms for constructing a shell around a surface and rendering a volumetric-textured surface. A mipmap technique for volumetric textures is explored as well. The gradient field of a generalized distance function is used to construct a non-self-intersecting shell, which has other properties desirable for volumetric texture mapping. The rendering algorithm is designed and implemented on NVIDIA GeForceFX video chips. Finally, we demonstrate a number of interactive operations that these algorithms enable.
-
Ph.D. Thesis
2004
TM-LPSAT: Encoding Temporal Metric Planning in Continuous Time
Shin, Ji-Ae
Abstract
|
PDF
Title: TM-LPSAT: Encoding Temporal Metric Planning in Continuous Time
Candidate: Shin, Ji-Ae
Advisor(s): Davis, Ernest
Abstract:
In any domain with change, the dimension of time is inherently involved. Whether the domain should be modeled in discrete time or continuous time depends on aspects of the domain to be modeled. Many complex real-world domains involve continuous time, resources, metric quantities and concurrent actions. Planning in such domains must necessarily go beyond simple discrete models of time and change.
In this thesis, we show how the SAT-based planning framework can be extended to generate plans of concurrent asynchronous actions that may depend on, or make changes to, piecewise-linear metric constraints in continuous time.
In the SAT-based planning framework, a planning problem is formulated as a satisfiability problem of a set of propositional constraints (axioms) such that any model of the axioms corresponds to a valid plan. There are two parameters to a SAT-based planning system: an encoding scheme for representing plans of bounded length and a propositional SAT solver to search for a model. The LPSAT architecture is composed of a SAT solver integrated with a linear arithmetic constraint solver in order to deal with metric aspects of domains.
We present encoding schemes for temporal models of continuous time defined in PDDL+: (i) durative actions with discrete and/or continuous changes; (ii) a real-time temporal model with exogenous events and autonomous processes capturing continuous changes. The encoding represents, in a CNF formula over arithmetic constraints and propositional fluents, time-stamped parallel plans, possibly with concurrent continuous and/or discrete changes. In addition, we present encoding schemes for multi-capacity resources, partitioned interval resources, and metric quantities represented as intervals. An interval type can be used as a parameter to an action as well as a fluent type.
Based on the LPSAT engine, the TM-LPSAT temporal metric planner has been implemented: given a PDDL+ representation of a planning problem, the compiler of TM-LPSAT translates it into a CNF formula, which is fed into the LPSAT engine to find a solution corresponding to a plan for the planning problem. We have also experimented with our temporal metric encodings in another decision procedure, MathSAT, which handles propositional combinations of linear constraints and Boolean variables. The results show that in terms of search time the SAT-based approach to temporal metric planning can be comparable to other planning approaches, and that there is plenty of room to push the limits of the SAT-based approach further.
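To make the SAT-based framing concrete, here is a deliberately tiny sketch (my own toy, not TM-LPSAT): a one-fluent, bounded-horizon planning problem encoded as constraints over time-indexed variables. Brute-force enumeration stands in for the SAT solver, and every satisfying model corresponds to a valid plan.

```python
from itertools import product

# Toy illustration of SAT-style bounded-horizon planning (a sketch, not
# TM-LPSAT): one fluent `on`, one action `toggle`, horizon T. The checks
# below play the role of CNF axioms; brute-force enumeration over
# time-indexed variables stands in for the SAT solver.
T = 2

def valid(on, act):
    if on[0]:                              # initial-state axiom: light off
        return False
    for t in range(T):                     # effect/frame axiom per step
        if on[t + 1] != (on[t] ^ act[t]):  # on' <-> on XOR toggle
            return False
    return bool(on[T])                     # goal axiom: light on

plans = [act for on in product([0, 1], repeat=T + 1)
             for act in product([0, 1], repeat=T)
             if valid(on, act)]
# each satisfying model corresponds to a plan: toggle exactly once
```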
-
TR2004-850
2004
Optical flow estimation as distributed optimization problem - an aVLSI implementation
Stocker, Alan
Abstract
|
PDF
Title: Optical flow estimation as distributed optimization problem - an aVLSI implementation
Author(s): Stocker, Alan
Abstract:
I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches with respect to both the applied model of optical flow estimation and the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model assures spatiotemporal continuity of visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30x30-array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
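A software analogue of a gradient-based, biased motion model can be sketched in a few lines (an illustration in the spirit of the model, not the chip's circuit equations): solve for a single global translational flow by regularized least squares on the brightness-constancy constraint. The bias term `lam` keeps the problem well-posed for any input, loosely mirroring the well-posedness guarantee described above.

```python
import numpy as np

# Illustration of a gradient-based, biased motion model (not the chip's
# circuits): solve for one global translational flow (u, v) by regularized
# least squares on the brightness-constancy constraint Ix*u + Iy*v + It = 0.
def global_flow(I0, I1, lam=1e-3):
    Ix = np.gradient(I0, axis=1)           # spatial brightness gradients
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0                           # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    M = A.T @ A + lam * np.eye(2)          # bias lam keeps M invertible
    return np.linalg.solve(M, -A.T @ It.ravel())
```

Increasing `lam` biases the estimate toward zero motion, a software counterpart of a globally adjustable bias parameter.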
-
Ph.D. Thesis
2004
Unsupervised Discovery of Extraction Patterns for Information Extraction
Sudo, Kiyoshi
Abstract
|
PDF
Title: Unsupervised Discovery of Extraction Patterns for Information Extraction
Candidate: Sudo, Kiyoshi
Advisor(s): Grishman, Ralph; Sekine, Satoshi
Abstract:
The task of Information Extraction (IE) is to find specific types of information in natural language text. In particular, *event extraction* identifies instances of a particular type of event or fact (a particular "scenario"), including the entities involved, and fills a database which has been pre-defined for the scenario. As the number of documents available on-line has multiplied, event extraction has grown in importance for various applications, including tracking terrorist activities from newswire sources and building a database of job postings from the Web, to name a few.
Linguistic contexts, such as predicate-argument relationships, have been widely used as *extraction patterns* to identify the items to be extracted from the text. The cost of creating extraction patterns for each scenario has been a bottleneck limiting the portability of information extraction systems to different scenarios, although there has been some research on semi-supervised pattern discovery procedures to reduce this cost. The challenge is to develop a fully automatic method for identifying extraction patterns for a scenario specified by the user.
This dissertation presents a novel approach for the unsupervised discovery of extraction patterns for event extraction from raw text. First, we present a framework that allows the user to have a self-customizing information extraction system for his/her query: the Query-Driven Information Extraction (QDIE) framework. The input to the QDIE framework is the user's query: either a set of keywords or a narrative description of the event extraction task.
Second, we assess the improvement in extraction pattern models. By considering the shortcomings of the prior work based on predicate-argument models and their extensions, we propose a novel extraction pattern model that is based on arbitrary subtrees of dependency trees.
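The subtree pattern space can be sketched as follows (hypothetical toy code, not the thesis implementation): enumerate the connected subtrees of a dependency tree, up to a size bound, by repeatedly growing node sets along tree edges. These node sets are the candidates from which a subtree-based pattern model draws.

```python
# Toy sketch of the subtree pattern space (hypothetical, not the thesis
# implementation): enumerate connected subtrees of a dependency tree, as
# frozensets of nodes, up to a size bound, by growing sets along edges.
def subtrees(edges, max_size=3):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    found = {frozenset([n]) for n in adj}      # size-1 subtrees
    frontier = set(found)
    for _ in range(max_size - 1):
        nxt = set()
        for s in frontier:
            for n in s:
                for m in adj[n] - s:           # grow by one adjacent node
                    nxt.add(s | {m})
        found |= nxt
        frontier = nxt
    return found
```

On the toy tree "police arrested the man" (edges below), this yields 4 single nodes, 3 edges, and 2 three-node subtrees.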
Third, we address the issue of portability across languages. As a case study of the QDIE framework, we implemented a pre-CODIE system, a Cross-Lingual On-Demand Information Extraction system requiring minimal human intervention, which incorporates the QDIE framework as a component for pattern discovery. In addition, we assess the role of machine translation in cross-lingual information extraction by comparing translation-based implementations.
-
TR2004-856
2004
Three-level BDDC in Two Dimensions
Tu, Xuemin
Abstract
|
PDF
Title: Three-level BDDC in Two Dimensions
Author(s): Tu, Xuemin
Abstract:
BDDC methods are nonoverlapping iterative substructuring domain decomposition methods for the solutions of large sparse linear algebraic systems arising from discretization of elliptic boundary value problems. They are similar to the balancing Neumann-Neumann algorithm. However, in BDDC methods, a small number of continuity constraints are enforced across the interface, and these constraints form a new coarse, global component. An important advantage of using such constraints is that the Schur complements that arise in the computation will all be strictly positive definite. The coarse problem is generated and factored by a direct solver at the beginning of the computation. However, this problem can ultimately become a bottleneck, if the number of subdomains is very large. In this paper, two three-level BDDC methods are introduced for solving the coarse problem approximately in two dimensions, while still maintaining a good convergence rate. Estimates of the condition numbers are provided for the two three-level BDDC methods and numerical experiments are also discussed.
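The positive-definiteness remark can be checked numerically in a few lines (an illustration of the linear-algebra fact, not the BDDC algorithm itself): eliminating the interior unknowns of a symmetric positive definite matrix leaves a symmetric positive definite Schur complement on the interface.

```python
import numpy as np

# Illustration (not the BDDC algorithm itself): after eliminating the
# interior unknowns I of a symmetric positive definite matrix A, the
# Schur complement S on the interface unknowns G is again SPD.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)            # an SPD test matrix
I, G = slice(0, 4), slice(4, 6)        # interior / interface partition
S = A[G, G] - A[G, I] @ np.linalg.solve(A[I, I], A[I, G])
evals = np.linalg.eigvalsh(S)          # all eigenvalues are positive
```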
-
Ph.D. Thesis
2004
An Efficient and High-Order Accurate Boundary Integral Solver for the Stokes Equations in Three Dimensional Complex Geometries
Ying, Lexing
Abstract
|
PDF
Title: An Efficient and High-Order Accurate Boundary Integral Solver for the Stokes Equations in Three Dimensional Complex Geometries
Candidate: Ying, Lexing
Advisor(s): Zorin, Denis
Abstract:
This dissertation presents an efficient and high-order boundary integral solver for the Stokes equations in complex 3D geometries. The targeted applications of this solver are the flow problems in domains involving moving boundaries. In such problems, traditional finite element methods involving 3D unstructured mesh generation experience difficulties. Our solver uses the indirect boundary integral formulation and discretizes the equation using the Nyström method.
Although our solver is designed for the Stokes equations, we show that it can be generalized to other constant coefficient elliptic partial differential equations (PDEs) with non-oscillatory kernels.
First, we present a new geometric representation of the domain boundary. This scheme takes quadrilateral control meshes with arbitrary geometry and topology as input, and produces smooth surfaces approximating the control meshes. Our surfaces are parameterized over several overlapping charts through explicit nonsingular C∞ parameterizations, depend linearly on the control points, have fixed-size local support for basis functions, and have good visual quality.
Second, we describe a kernel independent fast multipole method (FMM) and its parallel implementation. The main feature of our algorithm is that it is based only on kernel evaluation and does not require the multipole expansions of the underlying kernel. We have tested our method on kernels from a wide range of elliptic PDEs. Our numerical results indicate that our method is efficient and accurate. Other advantages include the simplicity of the implementation and its immediate extension to other elliptic PDE kernels. We also present an MPI based parallel implementation which scales well up to thousands of processors.
Third, we present a framework to evaluate the singular integrals in our solver. A singular integral is decomposed into a smooth far field part and a local part that contains the singularity. The smooth part of the integral is integrated using the trapezoidal rule over overlapping charts, and the singular part is integrated in polar coordinates, which removes or decreases the order of the singularity. We also describe a novel algorithm to integrate the nearly singular integrals coming from the evaluation at points close to the boundary.
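The polar-coordinates device can be seen in a toy computation (illustrative only, not the solver's quadrature): the polar Jacobian r cancels a 1/r singularity at the origin, so simple midpoint/trapezoidal rules converge where a naive Cartesian rule would struggle.

```python
import numpy as np

# Toy version of the polar-coordinates trick (illustrative, not the
# solver's quadrature): integrate cos(theta)^2 / r over the unit disk.
# The integrand is singular at the origin, but the polar Jacobian r
# cancels the 1/r, so simple rules converge; the exact value is pi.
def integrate_polar(n=400):
    r = (np.arange(n) + 0.5) / n            # midpoint rule in radius
    th = 2 * np.pi * np.arange(n) / n       # trapezoidal rule in angle
    R, TH = np.meshgrid(r, th)
    f = np.cos(TH) ** 2 / R                 # singular integrand
    return (f * R).sum() * (1.0 / n) * (2 * np.pi / n)
```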
-
Ph.D. Thesis
2004
High Performance Data Mining in Time Series: Techniques and Case Studies
Zhu, Yunyue
Abstract
|
PDF
Title: High Performance Data Mining in Time Series: Techniques and Case Studies
Candidate: Zhu, Yunyue
Advisor(s): Shasha, Dennis
Abstract:
Note: A significantly improved and expanded description of this material is available in the book High Performance Discovery in Time Series, Springer-Verlag, 2004, ISBN 0-387-00857-8.
As extremely large time series data sets grow more prevalent in a wide variety of settings, we face the significant challenge of developing efficient analysis methods. This dissertation addresses the problem of designing fast, scalable algorithms for the analysis of time series.
The first part of this dissertation describes a framework for high performance time series data mining based on important primitives. Data reduction transforms, such as the Discrete Fourier Transform, the Discrete Wavelet Transform, Singular Value Decomposition and Random Projection, reduce the size of the data without substantial loss of information, thereby providing a synopsis of the data. Indexing methods organize data so that the time series data can be retrieved efficiently. Transformations of time series, such as shifting, scaling, time shifting, time scaling and dynamic time warping, facilitate the discovery of flexible patterns from time series.
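A minimal sketch of one such data reduction primitive (illustrative parameters, not the dissertation's code): truncate a series to its first k Fourier coefficients as the synopsis, then measure the reconstruction error. For a band-limited series the error is essentially zero, which is why distances computed on the synopsis approximate distances on the raw data.

```python
import numpy as np

# Sketch of a data reduction primitive (illustrative parameters): keep
# only the first k Fourier coefficients as a synopsis of the series; for
# a band-limited series the reconstruction error is essentially zero.
def dft_synopsis(x, k=8):
    X = np.fft.rfft(x)
    X[k:] = 0                              # truncate high frequencies
    return np.fft.irfft(X, n=len(x))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)
err = np.linalg.norm(x - dft_synopsis(x)) / np.linalg.norm(x)
```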
The second part of this dissertation integrates the above primitives into useful applications ranging from music to physics to finance to medicine.
StatStream
StatStream is a system based on fast algorithms for finding the most highly correlated pairs of time series from among thousands of time series streams, doing so in a moving-window fashion. It can be used to find correlations in time series in finance and in scientific applications.
HumFinder
Most people hum rather poorly. Nevertheless, somehow people have some idea what we are humming when we hum. The goal of the query-by-humming program, HumFinder, is to make a computer do what a person can do. Using pitch translation, time dilation, and dynamic time warping, one can match an inaccurate hum to a melody remarkably accurately.
OmniBurst
Burst detection is the activity of finding abnormal aggregates in data streams. Our software, OmniBurst, can detect bursts of varying durations. Our example applications are monitoring gamma rays and stock market price volatility. The software makes use of a shifted wavelet structure to create a linear-time filter that guarantees that no bursts will be missed, while also guaranteeing (under a reasonable statistical model) that the filter eliminates nearly all false positives.
-
TR2003-839
2003
A kernel-independent fast multipole algorithm
Biros, George;
Ying, Lexing; Zorin, Denis
Abstract
|
PDF
Title: A kernel-independent fast multipole algorithm
Author(s): Biros, George; Ying, Lexing; Zorin, Denis
Abstract:
We present a new fast multipole method for particle simulations. The main feature of our algorithm is that it is kernel independent, in the sense that no analytic expansions are used to represent the far field. Instead we use equivalent densities, which we compute by solving small Dirichlet-type boundary value problems. The translations from the sources to the induced potentials are accelerated by singular value decomposition in 2D and fast Fourier transforms in 3D. We have tested the new method on the single and double layer operators for the Laplacian, the modified Laplacian, the Stokes, the modified Stokes, the Navier, and the modified Navier operators in two and three dimensions. Our numerical results indicate that our method compares very well with the best known implementations of the analytic FMM for both the Laplacian and modified Laplacian kernels. Its advantage is the (relative) simplicity of the implementation and its immediate extension to more general kernels.
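The kernel-independence point can be illustrated numerically (a sketch of the underlying observation, not the paper's algorithm): for a non-oscillatory kernel, the interaction matrix between two well-separated point clusters is numerically low-rank, and this compressibility can be detected using kernel evaluations alone.

```python
import numpy as np

# Sketch of the kernel-independence observation (not the paper's
# algorithm): for a non-oscillatory kernel, the interaction matrix
# between well-separated point clusters is numerically low-rank, which
# a method based only on kernel evaluations can exploit.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 1.0, (100, 2))      # source cluster
trg = rng.uniform(10.0, 11.0, (100, 2))    # well-separated target cluster
K = -np.log(np.linalg.norm(trg[:, None, :] - src[None, :, :], axis=2))
s = np.linalg.svd(K, compute_uv=False)     # 2D Laplace kernel evaluations
rank = int(np.sum(s > 1e-12 * s[0]))       # numerical rank is far below 100
```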
-
TR2003-837
2003
An Embedded Boundary Integral Solver for the Stokes Equations
Biros, George;
Ying, Lexing; Zorin, Denis
Abstract
|
PDF
Title: An Embedded Boundary Integral Solver for the Stokes Equations
Author(s): Biros, George; Ying, Lexing; Zorin, Denis
Abstract:
We present a new method for the solution of the Stokes equations. Our goal is to develop a robust and scalable methodology for two and three dimensional, moving-boundary, flow simulations. Our method is based on Anita Mayo's method for the Poisson's equation: "The Fast Solution of Poisson's and the Biharmonic Equations on Irregular Regions", SIAM J. Num. Anal., 21 (1984), pp. 285--299. We embed the domain in a rectangular domain, for which fast solvers are available, and we impose the boundary conditions as interface (jump) conditions on the velocities and tractions. We use an indirect boundary integral formulation for the homogeneous Stokes equations to compute the jumps. The resulting integral equations are discretized by Nyström's method. The rectangular domain problem is discretized by finite elements for a velocity-pressure formulation with equal order interpolation bilinear elements (Q1-Q1). Stabilization is used to circumvent the inf-sup condition for the pressure space. For the integral equations, fast matrix-vector multiplications are achieved via an N log N algorithm based on a block representation of the discrete integral operator, combined with (kernel independent) singular value decomposition to sparsify low-rank blocks. Our code is built on top of PETSc, an MPI based parallel linear algebra library. The regular grid solver is a Krylov method (Conjugate Residuals) combined with an optimal two-level Schwarz preconditioner. For the integral equation we use GMRES. We have tested our algorithm on several numerical examples and we have observed optimal convergence rates.
-
TR2003-835
2003
Survey: Eigenvector Analysis in Webpage Rankings
Chang, Hung-Hsien
Abstract
|
PDF
Title: Survey: Eigenvector Analysis in Webpage Rankings
Author(s): Chang, Hung-Hsien
Abstract:
Two major techniques have been proposed for using the structure of links in the World Wide Web to determine the relative significance of Web Pages. The PageRank algorithm \cite{BP98}, which is a critical part of the Google search engine, gives a single measure of importance of each page in the Web. The HITS algorithm \cite{K98} applies to a set of pages believed relevant to a given query, and assigns two values to each page: the degree to which the page is a hub and the degree to which it is an authority. Both algorithms have a natural interpretation in terms of a random walk over the set of pages involved, and in both cases the computation involved amounts to computing an eigenvector over the transition matrix for this random walk.
This paper surveys the literature discussing these two techniques and their variants, and their connection to random walks and eigenvector computation. It also discusses the stability of these techniques under small changes in the Web link structure.
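The random-walk/eigenvector connection can be made concrete with a textbook power-iteration sketch of PageRank on a hypothetical four-page web (illustrative code, not the production algorithm of either system): the score vector is the principal eigenvector of the damped random-walk transition matrix.

```python
import numpy as np

# Textbook power-iteration sketch of PageRank on a hypothetical 4-page
# web (illustrative, not the production algorithm): the score vector is
# the principal eigenvector of the damped random-walk transition matrix.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}    # page -> outgoing links
n, d = 4, 0.85                                  # damping factor d
P = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        P[dst, src] = 1.0 / len(outs)           # column-stochastic transitions
r = np.full(n, 1.0 / n)
for _ in range(100):                            # power iteration
    r = (1 - d) / n + d * P @ r                 # random jump + walk step
# page 2, with the most incoming links, gets the highest score
```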
-
TR2003-845
2003
Shrinkage-Based Similarity Metric for Cluster Analysis of Microarray Data
Cherepinsky, Vera;
Feng, Jiawu; Rejali, Marc; Mishra, Bud
Abstract
|
PDF
Title: Shrinkage-Based Similarity Metric for Cluster Analysis of Microarray Data
Author(s): Cherepinsky, Vera; Feng, Jiawu; Rejali, Marc; Mishra, Bud
Abstract:
The current standard correlation coefficient used in the analysis of microarray data, including gene expression arrays, was introduced in [1]. Its formulation is rather arbitrary. We give a mathematically rigorous derivation of the correlation coefficient of two gene expression vectors based on James-Stein shrinkage estimators. We use the background assumptions described in [1], also taking into account the fact that the data can be treated as transformed into normal distributions. While [1] uses zero as an estimator for the expression vector mean μ, we start with the assumption that for each gene, μ is itself a zero-mean normal random variable (with a priori distribution N(0, τ²)), and use Bayesian analysis to update that belief, to obtain the a posteriori distribution of μ in terms of the data. The estimator for μ, obtained after shrinkage towards zero, differs from the mean of the data vectors and ultimately leads to a statistically robust estimator for correlation coefficients.
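The shrinkage step has a compact closed form. The following sketch (illustrative values for σ² and τ², not the paper's data) shows the posterior mean interpolating between the Eisen choice (zero) and the Pearson choice (the raw sample mean):

```python
import numpy as np

# Sketch of the shrinkage step (illustrative sigma^2 and tau^2, not the
# paper's data): with prior mu ~ N(0, tau^2) and data x_i ~ N(mu, sigma^2),
# the posterior mean shrinks the sample mean toward zero, interpolating
# between the Eisen choice (zero) and the Pearson choice (the raw mean).
def shrunk_mean(x, sigma2, tau2):
    n = len(x)
    w = tau2 / (tau2 + sigma2 / n)   # shrinkage weight in [0, 1)
    return w * x.mean()

x = np.array([1.0, 2.0, 3.0])
est = shrunk_mean(x, sigma2=1.0, tau2=1.0)   # shrinks the mean 2.0 toward 0
```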
To evaluate the effectiveness of shrinkage, we conducted in silico experiments and also compared similarity metrics on a biological example using the data set from [1]. For the latter, we classified genes involved in the regulation of yeast cell-cycle functions by computing clusters based on various definitions of correlation coefficients, including the one using shrinkage, and contrasting them against clusters based on the activators known in the literature. In addition, we conducted an extensive computational analysis of the data from [1], empirically testing the performance of different values of the shrinkage factor γ and comparing them to the values of γ corresponding to the three metrics addressed here, namely, γ=0 for the Eisen metric, γ=1 for the Pearson correlation coefficient, and γ computed from the data for the Shrinkage metric.
The estimated "false-positives" and "false-negatives" from this study indicate the relative merits of clustering algorithms based on different statistical correlation coefficients as well as the sensitivity of the clustering algorithm to small perturbations in the correlation coefficients. These results indicate that using the shrinkage metric improves the accuracy of the analysis.
All derivation steps are described in detail; all mathematical assertions used in the derivation are proven in the appendix.
[1] Eisen, M.B., Spellman, P.T., Brown, P.O., and Botstein, D. (1998), PNAS USA 95, 14863-14868.
-
Ph.D. Thesis
2003
Comparing and Improving Centralized and Distributed Techniques for Coordinating Massively Parallel Shared-Memory Systems
Freudenthal, Eric
Abstract
|
PDF
Title: Comparing and Improving Centralized and Distributed Techniques for Coordinating Massively Parallel Shared-Memory Systems
Candidate: Freudenthal, Eric
Advisor(s): Gottlieb, Allan
Abstract:
Two complementary approaches have been proposed to achieve high performance inter-process coordination on highly parallel shared-memory systems. Gottlieb et al. introduced the technique of combining concurrent memory references, thereby reducing hot spot contention and enabling the bottleneck-free execution of algorithms referencing a small number of shared variables. Mellor-Crummey and Scott introduced an alternative distributed local-spin technique that minimizes hot spot contention by not polling hotspot variables and exploiting the availability of processor-local shared memory. My principal contributions are a comparison of these two approaches, and significant improvements to the former.
The NYU Ultra3 prototype is the only system built that implements memory reference combining. My research utilizes micro-benchmark simulation studies of massively parallel Ultra3 systems executing coordination algorithms. This investigation detects problems in the Ultra3 design that result in higher-than-expected memory latency for reference patterns typical of busy-wait polling. This causes centralized coordination algorithms to perform poorly. Several architectural enhancements are described that significantly reduce the latency of these access patterns, thereby improving the performance of the centralized algorithms.
I investigate existing centralized algorithms for readers-writers and barrier coordination, all of which require fetch-and-add, and discover variants that require fewer memory accesses (and hence have shorter latency). In addition, my evaluation includes novel algorithms that require only a restricted form of fetch-and-add.
Coordination latency of these algorithms executed on the enhanced combining architecture is compared to the latency of the distributed local-spin alternatives. These comparisons indicate that the distributed local-spin dissemination barrier, which generates no hot spot traffic, has latency slightly inferior to the best centralized algorithms investigated. However, for the less structured readers-writers problem, the centralized algorithms significantly outperform the distributed local-spin algorithm.
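For intuition, a centralized fetch-and-add barrier can be sketched as follows (my own emulation, not one of the thesis's algorithms: the hardware combining primitive is stood in for by a lock-protected counter, and the busy-wait loop is exactly the hot spot polling that combining networks are designed to absorb):

```python
import threading

# Emulation sketch (not one of the thesis's algorithms): a centralized
# barrier built from fetch-and-add; the hardware combining primitive is
# stood in for by a lock-protected counter.
class FetchAndAdd:
    def __init__(self, v=0):
        self._v, self._lock = v, threading.Lock()
    def faa(self, delta):
        with self._lock:          # atomically return old value, add delta
            old = self._v
            self._v += delta
            return old

class FaaBarrier:
    def __init__(self, n):
        self.n = n
        self.count = FetchAndAdd(0)     # arrivals in the current generation
        self.release = FetchAndAdd(0)   # generation number
    def wait(self):
        gen = self.release.faa(0)
        if self.count.faa(1) == self.n - 1:    # last thread to arrive
            self.count.faa(-self.n)            # reset for reuse
            self.release.faa(1)                # release all waiters
        else:
            while self.release.faa(0) == gen:  # busy-wait poll (the hot spot)
                pass
```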
-
TR2003-849
2003
Comparing the Performance of Centralized and Distributed Coordination on Systems with Improved Combining Switches
Freudenthal, Eric;
Gottlieb, Allan
Abstract
|
PDF
Title: Comparing the Performance of Centralized and Distributed Coordination on Systems with Improved Combining Switches
Author(s): Freudenthal, Eric; Gottlieb, Allan
Abstract:
Memory system congestion due to serialization of hot spot accesses can adversely affect the performance of interprocess coordination algorithms. Hardware and software techniques have been proposed to reduce this congestion and thereby provide superior system performance. The combining networks of Gottlieb et al. automatically parallelize concurrent hot spot memory accesses, improving the performance of algorithms that poll a small number of shared variables. The widely used MCS distributed-spin algorithms take a software approach: they reduce hot spot congestion by polling only variables stored locally. Our investigations detected performance problems in existing designs for combining networks, and we propose mechanisms that alleviate them. Simulation studies described herein indicate that centralized readers-writers algorithms executed on the improved combining networks have performance at least competitive with the MCS algorithms.
-
TR2003-848
2003
QTM: Trust Management with Quantified Stochastic Attributes
Freudenthal, Eric;
Karamcheti, Vijay
Abstract
|
PDF
Title: QTM: Trust Management with Quantified Stochastic Attributes
Author(s): Freudenthal, Eric; Karamcheti, Vijay
Abstract:
Trust management systems enable the construction of access-control infrastructures suitable for protecting sensitive resources from access by unauthorized agents. The state of the art in such systems (i) provides fail-safe behavior, in that access will be denied when authorizing credentials are revoked, (ii) can mitigate the risk of insider attacks using mechanisms for threshold authorization in which several independent partially trusted agents are required to co-sponsor sensitive activities, and (iii) is capable of enforcing intra- and inter-organizational access control policies.
Despite these advantages, trust management systems are limited in their ability to express partial trust. Additionally, they are cumbersome to administer when there are a large number of related access rights with differing trust (and thereby access) levels due to the need for explicit enumeration of the exponential number of agent combinations. More importantly, these systems have no provision for fault tolerance in cases where a primary authorization is lost (perhaps due to revocation), but others are available. Such situations may result in a cascading loss of access and possible interruption of service.
In this paper, we propose extending traditional trust management systems through a framework of reliability and confidence metrics. This framework naturally captures partial trust relationships, thereby reducing administrative complexity of access control systems with multiple related trust levels and increasing system availability in the presence of authorization faults while still maintaining equivalent safety properties.
-
Ph.D. Thesis
2003
Infrastructure Support for Accessing Network Services in Dynamic Network Environments
Fu, Xiaodong
Abstract
|
PDF
Title: Infrastructure Support for Accessing Network Services in Dynamic Network Environments
Candidate: Fu, Xiaodong
Advisor(s): Karamcheti, Vijay
Abstract:
Despite increases in network bandwidth, accessing network services across a wide area network still remains a challenging task. The difficulty mainly comes from the heterogeneous and constantly changing network environment, which usually causes undesirable user experience for network-oblivious applications.
A promising approach to address this is to provide network awareness in communication paths. While several such path-based infrastructures have been proposed, the network awareness provided by them is rather limited. Many challenging problems remain, in particular: (1) how to automatically create effective network paths whose performance is optimized for encountered network conditions, (2) how to dynamically reconfigure such paths when network conditions change, and (3) how to manage and distribute network resources among different paths and between different network regions. Furthermore, there is poor understanding of the benefits of using the path-based approach over other alternatives.
This dissertation describes solutions for these problems, built into a programmable network infrastructure called Composable Adaptive Network Services (CANS). The CANS infrastructure provides applications with network-aware communication paths that are automatically created and dynamically modified. CANS highlights four key mechanisms: (1) a high-level integrated type-based specification of components and network resources; (2) automatic path creation strategies; (3) system support for low-overhead path reconfiguration; and (4) distributed strategies for managing and allocating network resources.
We evaluate these mechanisms using experiments with typical applications running in the CANS infrastructure, and extensive simulation on a large scale network topology to compare with other alternatives. Experimental results validate the effectiveness of our approach, verifying that (1) the path-based approach provides the best and the most robust performance under a wide range of network configurations as compared to end-point or proxy-based alternatives; (2) automatic generation of network-aware paths is feasible and provides considerable performance advantages, requiring only minimal input from applications; (3) path reconfiguration strategies ensure continuous adaptation and provide desirable adaptation behaviors by using automatically generated paths; (4) both run-time overhead and reconfiguration time of CANS paths are negligible for most applications; (5) the resource management and allocation strategies allow the effective setup of shared resource pools in the network and the sharing of resources among paths.
-
TR2003-843
2003
Why Path-Based Adaptation? Performance Implications of Different Adaptation Mechanisms for Network Content Delivery
Fu, Xiaodong;
Karamcheti, Vijay
Abstract
|
PDF
Title: Why Path-Based Adaptation? Performance Implications of Different Adaptation Mechanisms for Network Content Delivery
Author(s): Fu, Xiaodong; Karamcheti, Vijay
Abstract:
Because of heterogeneous and dynamically changing network environments, content delivery across the network requires system support for coping with different network conditions in order to provide a satisfactory user experience. Despite the existence of many adaptation frameworks, the question of which adaptation approach performs best under which network configurations remains unanswered. The performance implications of the different adaptation approaches (end-point, proxy-based, and path-based) have not yet been studied. This paper addresses this shortcoming by conducting a series of simulation-based experiments that compare the performance of these adaptation approaches under different network configurations. To make the comparison fair, we propose approach-neutral strategies for constructing communication paths and managing network resources. The experimental results show that there are well-defined network environments under which each of these approaches delivers its best performance; among them, the path-based approach, which uses the whole communication path for adaptation, provides the best and most robust performance across network configurations and for different types of servers and clients.
-
TR2003-847
2003
Balancing Neumann-Neumann Preconditioners for the Mixed Formulation of Almost-Incompressible Linear Elasticity
Goldfeld, Paulo
Abstract
|
PDF
Title: Balancing Neumann-Neumann Preconditioners for the Mixed Formulation of Almost-Incompressible Linear Elasticity
Author(s): Goldfeld, Paulo
Abstract:
Balancing Neumann-Neumann methods are extended to the equations arising from the mixed formulation of almost-incompressible linear elasticity problems discretized with discontinuous-pressure finite elements. This family of domain decomposition algorithms has previously been shown to be effective for large finite element approximations of positive definite elliptic problems. Our methods are proved to be scalable and to depend weakly on the size of the local problems. Our work is an extension of previous work by Pavarino and Widlund on BNN methods for the Stokes equations.
Our iterative substructuring methods are based on the partition of the unknowns into interior ones - including interior displacements and pressures with zero average on every subdomain - and interface ones - displacements on the geometric interface and constant-by-subdomain pressures. The restriction of the problem to the interior degrees of freedom is then a collection of decoupled local problems that are well-posed even in the incompressible limit. The interior variables are eliminated and a hybrid preconditioner of BNN type is designed for the Schur complement problem. The iterates are restricted to a benign subspace, on which the preconditioned operator is positive definite, allowing for the use of conjugate gradient methods.
A complete convergence analysis of the method is presented for the constant coefficient case. The algorithm is extended to handle discontinuous coefficients, but a full analysis is not provided. Extensions of the algorithm and of the analysis are also presented for problems combining pure-displacement and mixed finite elements in different subregions. An algorithm is also proposed for problems with continuous discrete pressure spaces.
All the algorithms discussed have been implemented in parallel codes that have been successfully tested on large sample problems on large parallel computers; results are presented and discussed. Implementation issues are also discussed, including a version of our main algorithm that does not require the solution of any auxiliary saddle-point problem, since all subproblems of the preconditioner can be reduced to solving symmetric positive definite linear systems.
-
Ph.D. Thesis
2003
Enriched Content: Concept, Architecture, Implementation, and Applications
Chang, Hung-Hsien
Abstract
|
PDF
Title: Enriched Content: Concept, Architecture, Implementation, and Applications
Candidate: Chang, Hung-Hsien
Advisor(s): Perlin, Ken
Abstract:
Since the debut of the World Wide Web, Web users have been facing the following problems:
[Extended Semantics]
When we read or study a digital document that we wish to explore further, we typically interrupt our work to start a search. This costs time.
[Reverse Hyperlink]
When we visit a web page, we might be curious about what other hyperlinks point to the visited page. These links would most likely be of related interest. Can we get ``real time'' information about what other pages are pointing to this page?
[Version Control]
Many of us have been frustrated, and even annoyed, when the hyperlink we follow gives us a ``404 not found'' or the retrieved webpage content is entirely different from the one we bookmarked. Could we also have access to past versions, even if the hyperlink has been removed or the content has been changed?
[Composition Assistant]
Writing is not an easy task. We labor to structure a body of text, sort out ideas, find materials, and digest information. We would like an automated service that can associate the content we have produced with other contexts (on the Web) and bring these web contexts to us for reference.
In this thesis, we provide a unified framework and architecture, named enriched content, to resolve the above problems. We apply the architecture and show how enriched content can be used in each application. We demonstrate that this method can be a new way of writing add-on functions for various document applications without having to write an individual plug-in for each application or re-write each application. We also briefly discuss possible future development.
-
TR2003-836
2003
AQuery: Query Language for Ordered Data, Optimization Techniques, and Experiments
Lerner, Alberto;
Shasha, Dennis
Abstract
|
PDF
Title: AQuery: Query Language for Ordered Data, Optimization Techniques, and Experiments
Author(s): Lerner, Alberto; Shasha, Dennis
Abstract:
An order-dependent query is one whose result (interpreted as a multi-set) changes if the order of the input records is changed. In a stock-quotes database, for instance, retrieving all quotes concerning a given stock for a given day does not depend on order, because the collection of quotes does not depend on order. By contrast, finding the five-price moving average in a trade table gives a result that depends on the order of the table. Query languages based on the relational data model can handle order-dependent queries only through add-ons. SQL:1999, for example, permits the use of a data ordering mechanism called a "window" only in limited parts of a query. As a result, order-dependent queries are difficult to write in those languages, and optimization techniques for these features, applied as pre- or post-enumeration phases, are generally crude. The goal of this paper is to show that when order is a property of the underlying data model and algebra, order-dependent queries become natural to write and natural to optimize. We introduce AQuery, an SQL-like query language and algebra that has from-the-ground-up support for order. We also present a framework for optimizing the categories of order-dependent queries the language expresses. The framework is able to take advantage of the large body of query transformations developed for relational systems while incorporating new ones described here. We show by experiment that the resulting system is orders of magnitude faster than current SQL:1999 systems on many natural order-dependent queries.
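The order dependence described above can be made concrete with a short sketch in plain Python (an illustration only; AQuery's actual syntax is not reproduced here, and `moving_avg` is a hypothetical helper):

```python
def moving_avg(prices, window=5):
    """Trailing moving average: each output averages the last `window`
    prices seen so far, so the result depends on row order."""
    out = []
    for i in range(len(prices)):
        lo = max(0, i - window + 1)
        win = prices[lo:i + 1]
        out.append(sum(win) / len(win))
    return out

quotes = [10.0, 12.0, 11.0, 13.0, 14.0, 15.0]
print(moving_avg(quotes))            # depends on the order of the rows
print(moving_avg(quotes[::-1]))      # a reordering gives a different result
```

Retrieving the multi-set of quotes, by contrast, yields the same answer under any permutation of the input, which is exactly the distinction the abstract draws.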
-
TR2003-841
2003
Secure Untrusted Data Repository (SUNDR)
Li, Jinyuan;
Krohn, Maxwell; Mazieres, David; Shasha, Dennis
Abstract
|
PDF
Title: Secure Untrusted Data Repository (SUNDR)
Author(s): Li, Jinyuan; Krohn, Maxwell; Mazieres, David; Shasha, Dennis
Abstract:
We have implemented a secure network file system called SUNDR that guarantees the integrity of data even when malicious parties control the server. SUNDR splits storage functionality between two untrusted components, a block store and a consistency server. The block store holds all file data and most metadata. Without interpreting metadata, it presents a simple interface for clients to store variable-sized data blocks and later retrieve them by cryptographic hash.
The consistency server implements a novel protocol that guarantees close-to-open consistency whenever users see each other's updates. The protocol roughly consists of users exchanging version-stamped digital signatures of block server metadata, though a number of subtleties arise in efficiently supporting concurrent clients and group-writable files. We have proven the protocol's security under basic cryptographic assumptions. Without somehow producing signed messages valid under a user's (or the superuser's) public key, an attacker cannot tamper with a user's files---even given control of the servers and network. Despite this guarantee, SUNDR performs within a reasonable factor of existing insecure network file systems.
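The block-store half of this split can be sketched in a few lines of Python. This is a minimal illustration of content addressing, assuming SHA-256 as the hash; SUNDR's actual interface and on-disk layout are not specified here:

```python
import hashlib

class BlockStore:
    """Toy content-addressed block store: data is stored and retrieved
    by its cryptographic hash, so the store need not be trusted."""
    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        h = hashlib.sha256(data).hexdigest()
        self._blocks[h] = data
        return h

    def get(self, h: str) -> bytes:
        data = self._blocks[h]
        # A client re-hashes on retrieval, so a tampering server is caught.
        if hashlib.sha256(data).hexdigest() != h:
            raise ValueError("block store returned corrupted data")
        return data

store = BlockStore()
ref = store.put(b"file contents")
assert store.get(ref) == b"file contents"
```

The consistency protocol layered on top, which orders these hash references across users, is where the paper's novelty lies and is not reproduced here.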
-
Ph.D. Thesis
2003
A framework for optimistic program optimization
Pechtchanski, Igor
Abstract
|
PDF
Title: A framework for optimistic program optimization
Candidate: Pechtchanski, Igor
Advisor(s): Goldberg, Benjamin
Abstract:
Program optimization is a non-trivial problem. Compilers do a fair job, but cannot always deliver the best performance. The expressibility of general-purpose languages is limited; they do not allow programmers to describe expected run-time behavior, for example, and so some programs are more amenable to optimization than others, depending on what the compiler expects to see. We present a generic framework that addresses this problem in two ways: through verifiable source annotations that guide compiler analyses, and through optimistic use of assumptions and analysis results that hold for the subset of the program seen so far. Two novel applications are presented, one for each of the above approaches: a dynamic optimistic interprocedural type analysis algorithm, and a mechanism for specifying immutability assertions. Both applications result in measurable speedups, demonstrating the feasibility of each approach.
-
TR2003-840
2003
Robust Model-Free Tracking of Non-Rigid Shape
Torresani, Lorenzo;
Hertzmann, Aaron; Bregler, Christoph
Abstract
|
PDF
Title: Robust Model-Free Tracking of Non-Rigid Shape
Author(s): Torresani, Lorenzo; Hertzmann, Aaron; Bregler, Christoph
Abstract:
We present a robust algorithm for estimating non-rigid motion in video sequences. We build on recent methods for tracking video by enforcing global structure (such as rank constraints) on the tracking. These methods assume color constancy in the neighborhood of each tracked feature, an assumption that is violated by occlusions, deformations, lighting changes, and other effects. Our method identifies outliers while solving for flow. This allows us to obtain high-quality tracking from difficult sequences, even when there is no single "reference frame" in which all tracks are visible.
-
Ph.D. Thesis
2003
Secure and Robust Censorship-Resistant Publishing Systems
Waldman, Marc
Abstract
|
PDF
Title: Secure and Robust Censorship-Resistant Publishing Systems
Candidate: Waldman, Marc
Advisor(s): Mazieres, David
Abstract:
In many cases, censoring documents on the Internet is a fairly simple task. Almost any published document can be traced back to a specific host, and from there to an individual responsible for the material. Someone wishing to censor this material can use the courts, threats, or other means of intimidation to compel the relevant parties to delete the material or remove the host from the network. Even if these methods prove unsuccessful, various denial of service attacks can be launched against a host to make the document difficult or impossible to retrieve. Unless a host's operator has a strong interest in preserving a particular document, removing it is often the easiest course of action.
A censorship-resistant publishing system allows an individual to publish a document in such a way that it is difficult, if not impossible, for an adversary to completely remove, or convincingly alter, a published document. One useful technique for ensuring document availability is to replicate the document widely on servers located throughout the world. However, replication alone does not block censorship. Replicas need to be protected from accidental or malicious corruption. In addition, a censorship-resistant publishing system needs to address a number of other important issues, including protecting the publisher's identity while simultaneously preventing storage flooding attacks by anonymous users.
This dissertation presents the design and implementation of two very different censorship-resistant publishing systems. The first system, Publius, is a web based system that allows an individual to publish, update, delete and retrieve documents in a secure manner. Publius's main contributions include an automatic tamper checking mechanism, a method for updating or deleting anonymously published content and methods for publishing anonymously hyperlinked content. The second system, Tangler, is a peer-to-peer based system whose contributions include a unique publication mechanism and a dynamic self-policing network. The benefits of this new publication mechanism include the automatic replication of previously published content and an incentive to audit the reliability with which servers store content published by other people. In part through these incentives, the self-policing network identifies and ejects servers that exhibit faulty behavior.
-
TR2003-846
2003
Improved Link-Based Algorithms for Ranking Web Pages
Wang, Ziyang
Abstract
|
PDF
Title: Improved Link-Based Algorithms for Ranking Web Pages
Author(s): Wang, Ziyang
Abstract:
Several link-based algorithms, such as PageRank [19], HITS [15] and SALSA [16], have been developed to evaluate the popularity of web pages. These algorithms can be interpreted as computing the steady-state distribution of various Markov processes over web pages. The PageRank and HITS algorithms tend to over-rank tightly interlinked collections of pages, such as well-organized message boards. We show that this effect can be alleviated using a number of modifications to the underlying Markov process. Specifically, rather than weighting all outlinks from a given page equally, greater weight is given to links between pages that are, in other respects, farther apart in the Web, and less weight is given to links between pages that are nearby. We have experimented with a number of variants of this idea, using several different measures of ``distance'' in the Web and a number of different weighting schemes. We show that these revised algorithms often do avoid the over-ranking problem and give better overall rankings.
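A minimal sketch of such a modified process in Python, with a pluggable per-link weight function standing in for the paper's distance measures (the function names and interface are assumptions, not the paper's code):

```python
def weighted_pagerank(links, weight, damping=0.85, iters=100):
    """Power iteration for PageRank with non-uniform outlink weights.
    links: {page: [successor, ...]} -- every page must appear as a key.
    weight(u, v) -> positive float; a distance-based weight would plug
    in here to demote links between tightly interlinked nearby pages."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for u in pages:
            out = links[u]
            total = sum(weight(u, v) for v in out)
            if total == 0:
                for p in pages:  # dangling page: spread rank uniformly
                    new[p] += damping * rank[u] / n
                continue
            for v in out:
                new[v] += damping * rank[u] * weight(u, v) / total
        rank = new
    return rank
```

With `weight = lambda u, v: 1.0` this reduces to ordinary PageRank; any distance measure on pages can be folded into `weight` without changing the iteration.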
-
Ph.D. Thesis
2003
A Qualitative Profile-based Approach to Edge Detection
Yen, Ting-jen
Abstract
|
PDF
Title: A Qualitative Profile-based Approach to Edge Detection
Candidate: Yen, Ting-jen
Advisor(s): Yap, Chee
Abstract:
Edge detection is a fundamental problem of computer vision and has been widely investigated. We propose a new framework for edge detection based on edge profiles.
Our model, based on one-dimensional qualitative edge profile fitting and edge consistency, will produce one continuous edge from an initial seed point. A "profile" is defined as a finite cross-section of a two-dimensional image along a line segment. "Edge consistency" means that all the profiles on the same edge should be consistent.
Appropriate evaluation functions are needed for different types of edge profiles, such as step edges, ramp edges, etc. An evaluation function must produce local minima at the positions where edges of a given type occur in the profile. Instead of subjective thresholding, image noise is measured statistically and used as a systematic way of filtering false edges. We describe our method as "qualitative edge profile fitting" because it is not based on arbitrary thresholding. Once an edge point is localized, it can be extended into an edge by matching compatible profiles. Two profiles are considered compatible as long as their average difference is within the noise measurement. Another feature of our approach is its subpixel accuracy. The use of profiles and noise-induced threshold selection makes tasks such as joining broken edges more objective.
We develop the necessary algorithms and implement them. Different evaluation functions are constructed for different edge models and experimented on different one-dimensional profiles. The edge detector, using these evaluation functions, is then examined using different images and under different noise conditions.
-
TR2003-842
2003
A Distributed Adaptive Cache Update Algorithm for Dynamic Source Routing
Yu, Xin;
Kedem, Zvi M.
Abstract
|
PDF
Title: A Distributed Adaptive Cache Update Algorithm for Dynamic Source Routing
Author(s): Yu, Xin; Kedem, Zvi M.
Abstract:
On-demand routing protocols use route caches to make routing decisions. Due to frequent topology changes, cached routes easily become stale. To address the cache staleness issue in DSR (the Dynamic Source Routing protocol), prior work mainly used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately predict timeouts because topology changes are unpredictable. In this paper, we present a novel distributed cache update algorithm to make route caches adapt quickly to topology changes without using ad hoc parameters. We define a new cache structure called a cache table to maintain the information necessary for cache updates. When a node detects a link failure, our algorithm proactively notifies all reachable nodes that have cached the broken link in a distributed manner. We compare our algorithm with DSR with path caches and with Link-MaxLife through detailed simulations. We show that our algorithm significantly outperforms DSR with path caches and with Link-MaxLife.
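The core invalidation step can be sketched as follows. This toy version keeps a local list of routes and drops those that traverse a broken link; the paper's cache table and distributed notification protocol carry considerably more state than shown here:

```python
def uses_link(route, u, v):
    """True if the route (a tuple of node ids) traverses the link u -> v."""
    return any(route[i:i + 2] == (u, v) for i in range(len(route) - 1))

def purge_broken_link(cache, u, v):
    """On detecting that link u -> v failed, drop every cached route
    that would forward packets over it."""
    return [r for r in cache if not uses_link(r, u, v)]

cache = [("a", "b", "c"), ("a", "c"), ("d", "b", "c", "e")]
cache = purge_broken_link(cache, "b", "c")  # drops both routes through b->c
```

In the algorithm described above this purge is not only local: the detecting node also notifies, in a distributed manner, every reachable node that has cached the broken link.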
-
Ph.D. Thesis
2002
Expert-Driven Validation of Set-Based Data Mining Results
Adomavicius, Gediminas
Abstract
|
PDF
Title: Expert-Driven Validation of Set-Based Data Mining Results
Candidate: Adomavicius, Gediminas
Advisor(s): Tuzhilin, Alexander; Davis, Ernest
Abstract:
This dissertation addresses the problem of dealing with large numbers of set-based patterns, such as association rules and itemsets, discovered by data mining algorithms. Since many discovered patterns may be spurious, irrelevant, or trivial, one of the main problems is how to validate them, e.g., how to separate the ``good'' rules from the ``bad.'' Many researchers have advocated the explicit involvement of a human expert in the validation process. However, scalability becomes an issue when large numbers of patterns are discovered, since the expert cannot perform the validation on a pattern-by-pattern basis in a reasonable period of time. To address this problem, this dissertation describes a new expert-driven approach to set-based pattern validation.
The proposed validation approach is based on validation sequences, i.e., we rely on the expert's ability to iteratively apply various validation operators that can validate multiple patterns at a time, thus making the expert-based validation feasible. We identified the class of scalable set predicates called cardinality predicates and demonstrated how these predicates can be effectively used in the validation process, i.e., as a basis for validation operators. We examined various properties of cardinality predicates, including their expressiveness. We also have developed and implemented the set validation language (SVL) that can be used for manual specification of cardinality predicates by a domain expert. In addition, we have proposed and developed a scalable algorithm for set and rule grouping that can be used to generate cardinality predicates automatically.
The dissertation also explores various theoretical properties of sequences of validation operators and facilitates a better understanding of the validation process. We have also addressed the problem of finding optimal validation sequences and have shown that certain formulations of this problem are NP-complete. In addition, we provided some heuristics for addressing this problem.
Finally, we have tested our rule validation approach on several real-life applications, including personalization and bioinformatics applications.
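A cardinality predicate of the kind described above can be sketched in Python; the names and interface here are illustrative assumptions, not SVL syntax:

```python
def cardinality_predicate(reference, lo, hi):
    """Build a predicate that accepts an itemset when the number of its
    items falling inside `reference` lies in [lo, hi]. One such predicate
    validates (or rejects) many discovered patterns at a time."""
    ref = set(reference)
    def pred(itemset):
        k = len(set(itemset) & ref)
        return lo <= k <= hi
    return pred

rules = [{"milk", "bread"}, {"milk", "beer", "diapers"}, {"tea"}]
# Keep every rule that mentions at least one dairy item:
dairy = cardinality_predicate({"milk", "cheese"}, lo=1, hi=10)
kept = [r for r in rules if dairy(r)]
```

The point of operators like this is scalability: the expert writes one predicate and thereby validates whole groups of patterns, rather than inspecting rules one by one.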
-
Ph.D. Thesis
2002
Responsive Thinwire Visualization of Large Geographic Datasets
Been, Kenneth
Abstract
|
PDF
Title: Responsive Thinwire Visualization of Large Geographic Datasets
Candidate: Been, Kenneth
Advisor(s): Yap, Chee
Abstract:
This thesis describes a web-based, responsive, zooming and panning visualization system for a full-featured geographic description of the United States. Current web-based map servers provide, from a visualization standpoint, little more than one static image per page, with hyperlinks for navigation; continuous zooming and panning requires locally stored data. Our primary contribution is a multi-threaded, scalable and responsive client-server architecture that responds to user requests as naturally and quickly as possible, regardless of network bandwidth or reliability. This architecture can be generalized for use in other applications, including non-geographic ones. To this we add a scalable and flexible user interface for navigation of multi-scale geographic data, with intuitive zooming and panning, pop-up feature labels, and a user-controlled tree-hierarchy of windows. We build software tools and algorithms for translating the U.S. Census Bureau's TIGER data into a format designed for speedy database retrieval and network delivery, and for generalizing the data into multiple levels of detail. Because of anomalies in the TIGER data, this processing requires some human intervention.
-
Ph.D. Thesis
2002
Representing and Modifying Complex Surfaces
Biermann, Henning
Abstract
|
PDF
Title: Representing and Modifying Complex Surfaces
Candidate: Biermann, Henning
Advisor(s): Zorin, Denis
Abstract:
The increasing demand for highly detailed geometric models poses new and important problems in computer graphics and geometric modeling. Applications for complex models range from geometric design and scientific simulations to feature movies and video games.
We focus on the fundamental problem of creating and manipulating complex surface models. We address the problem by designing an efficient and general surface representation, and develop algorithms for efficient modification of surfaces represented in this form. Our surface representation extends existing subdivision-based representations with explicit representation of sharp features and boundaries, which is crucial in many computer-aided design applications.
We consider two types of surface modification: boolean operations on solids bounded by surfaces, and surface pasting. For boolean operations, our technique rapidly and robustly computes an approximate result rather than aiming for the precise solution; at the same time, our approach allows one to trade speed for accuracy and, in most cases, to compute the result with any desired accuracy. The second type of editing operation addresses the problem of transferring geometric features between different objects. Our technique makes it easy to combine geometric data from various sources (e.g. 3D scanning, CAGD models) into a single model.
-
TR2003-838
2002
An Embedded Boundary Integral Solver for the Unsteady Incompressible Navier-Stokes Equations
Biros, George;
Ying, Lexing; Zorin, Denis
Abstract
|
PDF
Title: An Embedded Boundary Integral Solver for the Unsteady Incompressible Navier-Stokes Equations
Author(s): Biros, George; Ying, Lexing; Zorin, Denis
Abstract:
We present a new method for the solution of the unsteady incompressible Navier-Stokes equations. Our goal is to achieve a robust and scalable methodology for two and three dimensional incompressible laminar flows. The Navier-Stokes operator discretization is done using boundary integrals and structured-grid finite elements. We use a two-step second-order accurate scheme to advance the equations in time. The convective term is discretized by an explicit, but unconditionally stable, semi-Lagrangian formulation; at each time step we invert a spatial constant-coefficient (modified) Stokes operator. The Dirichlet problem for the modified Stokes operator is formulated as a double-layer boundary integral equation. Domain integrals are computed via finite elements with appropriate forcing singularities to account for the irregular geometry. We use a velocity-pressure formulation which we discretize with bilinear elements (Q1-Q1), which give equal-order interpolation for the velocities and pressures. Stabilization is used to circumvent the div-stability condition for the pressure space. The integral equations are discretized by Nystrom's method. For the specific approximation choices, the method is second-order accurate. We present numerical results and discuss the performance and scalability of the method in two dimensions.
-
Ph.D. Thesis
2002
On computing the Pareto-optimal solution set in a large scale dynamic network
Daruwala, Raoul-Sam
Abstract
|
PDF
Title: On computing the Pareto-optimal solution set in a large scale dynamic network
Candidate: Daruwala, Raoul-Sam
Advisor(s): Mishra, Bud
Abstract:
Let G=(V,E) be a graph with time-dependent edges, where the cost of a path p through the graph is given by a vector of functions F(p) = [f_1(p), f_2(p), ..., f_n(p)], where f_1, f_2, ..., f_n are independent objective functions. When n > 1 there is no single notion of a ``best'' solution; instead we turn to Pareto-optimality to define the efficiency of a path. Given the set of paths P through the network, a path p' in P is Pareto-optimal if no other path dominates it, i.e., if there is no p in P with f_i(p) <= f_i(p') for every objective function f_i and f_j(p) < f_j(p') for some j.
The problem of planning itineraries on a transportation system involves computing the set of optimal paths through a time-dependent network where the cost of a path is determined by more than one, possibly non-linear and non-additive, cost function. This thesis introduces an algorithmic toolkit for finding the set of Pareto-optimal paths in time-dependent networks in the presence of multiple objective functions.
Multi-criteria path optimization problems are known to be NP-hard; however, by exploiting geometric and periodic properties of the dynamic graphs that model transit networks, we show that it is possible to compute the Pareto-optimal solution sets rapidly without resorting to heuristics. We also show that we can solve the itinerary problem in the presence of response-time constraints on a large-scale graph.
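The Pareto-dominance filter at the heart of such problems can be sketched naively in Python; the thesis's contribution is precisely to avoid this brute-force quadratic scan by exploiting the structure of transit networks:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Naive O(n^2) filter keeping only non-dominated cost vectors."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates)]

# Hypothetical itinerary costs: (travel time in minutes, number of transfers)
paths = [(30, 2), (45, 1), (50, 1), (25, 3)]
print(pareto_front(paths))  # (50, 1) is dominated by (45, 1) and drops out
```

Note that no single survivor is ``best'': each remaining path trades one objective against another, which is why the full Pareto set, not a single optimum, is the object to compute.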
-
TR2002-824
2002
Adaptive Service Access in Shared Wireless Environments
Fu, Xiaodong;
Karamcheti, Vijay
Abstract
|
PDF
Title: Adaptive Service Access in Shared Wireless Environments
Author(s): Fu, Xiaodong; Karamcheti, Vijay
Abstract:
Adaptation to network changes is important to provide applications with seamless service access in a shared wireless environment. Path-based mechanisms, which augment data paths with application-specific ``bridging'' components guided by minimal application input, are promising approaches for providing such support. Although shown to be successful in static network situations, their utility under dynamically changing network conditions has not been well-studied.
In this paper, we address this question by investigating the performance of a path-based approach, CANS (Composable Adaptive Network Services), in a dynamic environment. We find that the suitability of CANS-like approaches is hampered by inaccurate component models and expensive planning and reconfiguration. We address these problems by extending CANS to support (1) generalized path creation strategies to match different application performance preferences; (2) refined component models that enable adjustment at a finer granularity and more accurately represent the behavior of component compositions; and (3) local planning and reconfiguration mechanisms that improve responsiveness. We present the problems and evaluate our solutions using an image streaming application. The experimental results show that our solutions are effective.
-
TR2002-825
2002
Balancing Neumann-Neumann Preconditioners for Mixed Approximations of Heterogeneous Problems in Linear Elasticity
Goldfeld, Paulo;
Pavarino, Luca F.; Widlund, Olof B.
Abstract
|
PDF
Title: Balancing Neumann-Neumann Preconditioners for Mixed Approximations of Heterogeneous Problems in Linear Elasticity
Author(s): Goldfeld, Paulo; Pavarino, Luca F.; Widlund, Olof B.
Abstract:
Balancing Neumann-Neumann methods are extended to mixed formulations of the linear elasticity system with discontinuous coefficients, discretized with mixed finite or spectral elements with discontinuous pressures.
These domain decomposition methods implicitly eliminate the degrees of freedom associated with the interior of each subdomain and solve iteratively the resulting saddle point Schur complement using a hybrid preconditioner based on a coarse mixed elasticity problem and local mixed elasticity problems with natural and essential boundary conditions. A polylogarithmic bound in the local number of degrees of freedom is proven for the condition number of the preconditioned operator in the constant coefficient case.
Parallel and serial numerical experiments confirm the theoretical results, indicate that they still hold for systems with discontinuous coefficients, and show that our algorithm is scalable, parallel, and robust with respect to material heterogeneities. The results on heterogeneous general problems are also supported in part by our theory.
-
TR2002-834
2002
Overlapping Schwarz Preconditioners for Spectral Nedelec Elements for a Model Problem in H(curl)
Hientzsch, Bernhard
Abstract
|
PDF
Title: Overlapping Schwarz Preconditioners for Spectral Nedelec Elements for a Model Problem in H(curl)
Author(s): Hientzsch, Bernhard
Abstract:
A two-level overlapping domain decomposition method is analyzed for a Nedelec spectral element approximation of a model problem appearing in the solution of Maxwell's equations. The overlap between subdomains can consist of entire spectral elements or rectangular subsets of spectral elements. For fixed relative overlap and overlap made from entire elements, the condition number of the method is bounded, independently of the mesh size, the number of subregions, the coefficients and the degree of the spectral elements. In the case of overlap including just parts of spectral elements, a bound linear in the degree of the elements is proven. It is assumed that the coarse and fine mesh are quasi-uniform and shape-regular and that the domain is convex. Arguments that would not require quasi-uniformity of the coarse mesh and convexity of the domain are mentioned. Our work generalizes results obtained for lower-order Nedelec elements in Toselli [Numerische Mathematik (2000) 86:733-752]. Numerical results for the two-level algorithm in two dimensions are also presented, supporting our analysis.
-
TR2002-828
2002
Dual-Primal FETI Methods for Incompressible Stokes and Linearized Navier-Stokes Equations
Li, Jing
Abstract
|
PDF
Title: Dual-Primal FETI Methods for Incompressible Stokes and Linearized Navier-Stokes Equations
Author(s): Li, Jing
Abstract:
In this paper, a dual-primal FETI method is developed for solving incompressible Stokes equations approximated by mixed finite elements with discontinuous pressures in three dimensions. The domain of the problem is decomposed into non-overlapping subdomains, and the continuity of the velocity across the subdomain interface is enforced by introducing Lagrange multipliers. By a Schur complement procedure, the indefinite Stokes problem is reduced to a symmetric positive definite problem for the dual variables, i.e., the Lagrange multipliers. This dual problem is solved by a Krylov space method with a Dirichlet preconditioner. At each step of the iteration, both subdomain problems and a coarse problem on a coarse subdomain mesh are solved by a direct method. It is proved that the condition number of this preconditioned dual problem is independent of the number of subdomains and bounded from above by the product of the inverse of the inf-sup constant of the discrete problem and the square of the logarithm of the number of unknowns in the individual subdomain problems. Illustrative numerical results are presented by solving lid-driven cavity problems. This algorithm is also extended to solving the linearized non-symmetric Navier-Stokes equations.
-
TR2002-830
2002
Dual-Primal FETI Methods for Stationary Stokes and Navier-Stokes Equations
Li, Jing
Abstract
|
PDF
Title: Dual-Primal FETI Methods for Stationary Stokes and Navier-Stokes Equations
Author(s): Li, Jing
Abstract:
Finite element tearing and interconnecting (FETI) type domain decomposition methods are first extended to solving incompressible Stokes equations. One-level, two-level, and dual-primal FETI algorithms are proposed. Numerical experiments show that these FETI type algorithms are scalable, i.e., the number of iterations is independent of the number of subregions into which the given domain is subdivided. A convergence analysis is then given for dual-primal FETI algorithms both in two and three dimensions.
Extension to solving linearized nonsymmetric stationary Navier-Stokes equations is also discussed. The resulting linear system is no longer symmetric and a GMRES method is used to solve the preconditioned linear system. Eigenvalue estimates show that, for small Reynolds number, the nonsymmetric preconditioned linear system is a small perturbation of that in the symmetric case. Numerical experiments also show that, for small Reynolds number, the convergence of GMRES method is similar to the convergence of solving symmetric Stokes equations with the conjugate gradient method. The convergence of GMRES method depends on the Reynolds number; the larger the Reynolds number, the slower the convergence.
Dual-primal FETI algorithms are further extended to nonlinear stationary Navier-Stokes equations, which are solved by using a Picard iteration. In each iteration step, a linearized Navier-Stokes equation is solved by using a dual-primal FETI algorithm. Numerical experiments indicate that convergence of the Picard iteration depends on the Reynolds number, but is independent of both the number of subdomains and the subdomain problem size.
-
TR2002-832
2002
Efficiently Distributing Component-based Applications Across Wide-Area Environments
Llambiri, Deni;
Totok, Alexander; Karamcheti, Vijay
Abstract
|
PDF
Title: Efficiently Distributing Component-based Applications Across Wide-Area Environments
Author(s): Llambiri, Deni; Totok, Alexander; Karamcheti, Vijay
Abstract:
Distribution and replication of network-accessible applications has been shown to be an effective approach for delivering improved Quality of Service (QoS) to end users. An orthogonal trend seen in current-day network services is the use of component-based frameworks. Even though such component-based applications are natural candidates for distributed deployment, it is unclear if the design patterns underlying component frameworks also enable efficient service distribution in wide-area environments. In this paper, we investigate application design rules and their accompanying system-level support essential to a beneficial and efficient service distribution process. Our study targets the widely used Java 2 Enterprise Edition (J2EE) component platform and two sample component-based applications: Java Pet Store and RUBiS. Our results present strong experimental evidence that component-based applications can be efficiently distributed in wide-area environments, significantly improving QoS delivered to end users as compared to a centralized solution. Although current design patterns underlying component frameworks are not always suitable, we identify a small set of design rules for orchestrating interactions and managing component state that together enable efficient distribution. Furthermore, we show how enforcement of the identified design rules and automation of pattern implementation can be supported by container frameworks.
-
TR2002-833
2002
Online Codes
Maymounkov, Petar
Abstract
|
PDF
Title: Online Codes
Author(s): Maymounkov, Petar
Abstract:
We introduce online codes - a class of near-optimal codes for a very general loss channel which we call the free channel. Online codes are linear encoding / decoding time codes, based on sparse bipartite graphs, similar to Tornado codes, with a couple of novel properties: local encodability and rateless-ness. Local encodability is the property that each block of the encoding of a message can be computed independently from the others in constant time. This also implies that each encoding block is only dependent on a constant-sized part of the message and a few preprocessed bits. Rateless-ness is the property that each message has an encoding of practically infinite size.
We argue that rateless codes are more appropriate than fixed-rate codes for most situations where erasure codes were considered a solution. Furthermore, rateless codes meet new areas of application, where they are not replaceable by fixed-rate codes. One such area is information dispersal over peer-to-peer networks.
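The two properties named above can be illustrated with a toy XOR fountain encoder and peeling decoder. This is a minimal sketch of the idea rather than the actual online-code construction: the real scheme's degree distribution and auxiliary preprocessed bits are omitted, and all names here are illustrative.

```python
import random

def encode_block(message, degree_weights, rng):
    """Produce one check block: the XOR of d randomly chosen message blocks.

    Each block is generated independently in constant time (local
    encodability), and blocks can be produced without limit (ratelessness).
    """
    d = rng.choices(range(1, len(degree_weights) + 1), weights=degree_weights)[0]
    neighbors = rng.sample(range(len(message)), d)
    block = 0
    for i in neighbors:
        block ^= message[i]
    return neighbors, block

def decode(n, received):
    """Peeling decoder: repeatedly resolve check blocks with one unknown."""
    known = {}
    pending = [(set(nb), blk) for nb, blk in received]
    progress = True
    while progress and len(known) < n:
        progress = False
        for nb, blk in pending:
            unknown = nb - known.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                v = blk
                for j in nb:
                    if j != i:
                        v ^= known[j]
                known[i] = v
                progress = True
    return known
```

In practice the decoder succeeds once slightly more than n check blocks arrive, with high probability; the real construction tunes the degree distribution so that peeling almost never stalls.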
-
TR2002-826
2002
Building Secure File Systems Out of Byzantine Storage
Mazieres, David;
Shasha, Dennis
Abstract
|
PDF
Title: Building Secure File Systems Out of Byzantine Storage
Author(s): Mazieres, David; Shasha, Dennis
Abstract:
This paper shows how to implement a trusted network file system on an untrusted server. While cryptographic storage techniques exist that allow users to keep data secret from untrusted servers, this work concentrates on the detection of tampering attacks and stale data. Ideally, users of an untrusted storage server would immediately and unconditionally notice any misbehavior on the part of the server. This ideal is unfortunately not achievable. However, we define a notion of data integrity called fork consistency in which, if the server delays just one user from seeing even a single change by another, the two users will never again see one another's changes - a failure easily detectable with on-line communication. We give a practical protocol for a multi-user network file system called SUNDR, and prove that SUNDR offers fork consistency whether or not the server obeys the protocol.
-
TR2002-831
2002
Image Denoising using a Gaussian Scale Mixture in the Wavelet Domain
Portilla, Javier;
Strela, Vasily; Wainwright, Martin J.; Simoncelli, Eero P.
Abstract
|
PDF
Title: Image Denoising using a Gaussian Scale Mixture in the Wavelet Domain
Author(s): Portilla, Javier; Strela, Vasily; Wainwright, Martin J.; Simoncelli, Eero P.
Abstract:
We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multi-scale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the amplitudes of pyramid coefficients. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear (Wiener) estimate over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive Gaussian noise of known covariance that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error. In addition, we demonstrate the performance of the algorithm in removing sensor noise from high-ISO digital camera images.
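The estimator described above can be sketched for a single coefficient with a discretized prior on the hidden multiplier. This is an illustrative scalar reduction of the paper's vector-neighborhood model, not the published implementation; the prior values and noise variance below are invented for demonstration.

```python
import math

def gsm_bls_estimate(y, z_values, z_prior, noise_var):
    """Bayesian least-squares estimate of a clean coefficient x from noisy y.

    Model: x = sqrt(z)*u with u ~ N(0, 1) and hidden multiplier z;
    y = x + w with w ~ N(0, noise_var). Conditioned on z, the optimal
    estimate is the Wiener estimate z/(z + noise_var) * y; the BLS estimate
    averages these Wiener estimates over the posterior p(z | y).
    """
    weights = []
    for z, pz in zip(z_values, z_prior):
        var_y = z + noise_var  # marginal variance of y given z
        lik = math.exp(-y * y / (2 * var_y)) / math.sqrt(2 * math.pi * var_y)
        weights.append(lik * pz)
    total = sum(weights)
    posterior = [w / total for w in weights]
    return sum(p * (z / (z + noise_var)) * y
               for p, z in zip(posterior, z_values))
```

Because every Wiener factor lies in (0, 1), the estimate always shrinks the noisy observation toward zero, with the amount of shrinkage adapting to how plausible a large local variance is given y.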
-
Ph.D. Thesis
2002
Informative Features in Vision and Learning
Rudra, Archisman
Abstract
|
PDF
Title: Informative Features in Vision and Learning
Candidate: Rudra, Archisman
Advisor(s): Geiger, Davi
Abstract:
We explore the role of features in solving problems in computer vision and learning. Features capture important domain-dependent knowledge and are fundamental in simplifying problems. Our goal is to consider the universal features of the problem concerned, and not just the particular algorithms used in its solution. Such an approach reveals only the fundamental difficulties of any problem. For most problems we will face a host of other specialized concerns. Therefore, we consider simplified problems which capture the essence of our approach.
This thesis consists of two parts. First, we explore means of discovering features. We develop an information-theoretic criterion to identify features, which has deep connections to statistical estimation theory. We consider features to be ``nice'' representations of objects. We find that, ideally, a feature-space representation of an image is the most concise representation of the image which captures all available information in it. In practice, however, we are satisfied with an approximation to it. Therefore, we explore a few such approximations and explain their connection to the information-theoretic approach. We look at the algorithms which implement these approximations and at their generalizations in the related field of stereo vision.
Using features, whether they come from some feature-discovery algorithm or are hand-crafted, is usually an ad hoc process which depends on the actual problem and the exact representation of the features. This diversity mostly arises from the multitude of ways features capture information. In the second part of this thesis, we develop an architecture which lets us use features in a very flexible way, in the context of content-addressable memories. We apply this approach to two radically different domains: face images and English words. We also look at human performance in reconstructing words from fragments, which gives us some information about the memory subsystem in human beings.
-
TR2002-829
2002
Workload Characterization of a Personalized Web Site - And Its Implications for Dynamic Content Caching
Shi, Weisong;
Wright, Randy; Collins, Eli; Karamcheti, Vijay
Abstract
|
PDF
Title: Workload Characterization of a Personalized Web Site - And Its Implications for Dynamic Content Caching
Author(s): Shi, Weisong; Wright, Randy; Collins, Eli; Karamcheti, Vijay
Abstract:
Requests for dynamic and personalized content increasingly dominate current-day Internet traffic; however, traditional caching architectures are not well-suited to cache such content. Several recently proposed techniques, which exploit reuse at the sub-document level, promise to address this shortcoming, but require a better understanding of the workloads seen on web sites that serve such content. In this paper, we study the characteristics of a medium-sized personalized web site, NYUHOME, which is a customizable portal used by approximately 44,000 users from the New York University community. Our study leverages detailed statistics on server-side overheads and client-perceived request latencies. We then use these statistics to derive general implications for efficient caching and edge generation of dynamic content in the context of our ongoing CONCA project. Our study verifies both the need for and likely benefit from caching content at sub-document granularity, and points to additional opportunities for reducing client-perceived latency using prefetching, access prediction, and content transcoding.
-
TR2002-827
2002
StatStream: Statistical Monitoring of Thousands of Data Streams in Real Time
Zhu, Yunyue;
Shasha, Dennis
Abstract
|
PDF
Title: StatStream: Statistical Monitoring of Thousands of Data Streams in Real Time
Author(s): Zhu, Yunyue; Shasha, Dennis
Abstract:
Consider the problem of monitoring tens of thousands of time series data streams in an online fashion and making decisions based on them. In addition to single stream statistics such as average and standard deviation, we also want to find high correlations among all pairs of streams. A stock market trader might use such a tool to spot arbitrage opportunities. This paper proposes efficient methods for solving this problem based on Discrete Fourier Transforms and a three level time interval hierarchy. Extensive experiments on synthetic data and real world financial trading data show that our algorithm beats the direct computation approach by several orders of magnitude. It also improves on previous Fourier Transform approaches by allowing the efficient computation of time-delayed correlation over any size sliding window and any time delay. Correlation also lends itself to an efficient grid-based data structure. The result is the first algorithm that we know of to compute correlations over thousands of data streams in real time. The algorithm is incremental, has fixed response time, and can monitor the pairwise correlations of 10,000 streams on a single PC. The algorithm is embarrassingly parallelizable.
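The DFT-based approximation at the heart of this approach can be sketched as follows. For normalized series, Pearson correlation equals 1 - d^2/2 where d is the Euclidean distance between the series, and by Parseval's theorem that distance is well approximated from just the first few DFT coefficients when the energy sits in low frequencies. This toy version compares a single pair of streams and omits the sliding-window hierarchy and grid structure of the paper.

```python
import cmath
import math

def normalize(x):
    """Center and scale to unit L2 norm, so correlation = 1 - d^2 / 2."""
    n = len(x)
    mu = sum(x) / n
    norm = math.sqrt(sum((v - mu) ** 2 for v in x))
    return [(v - mu) / norm for v in x]

def dft_prefix(x, m):
    """First m DFT coefficients (orthonormal convention)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            / math.sqrt(n) for f in range(m)]

def approx_corr(x, y, m=4):
    """Approximate correlation from the first m non-DC DFT coefficients."""
    X = dft_prefix(normalize(x), m + 1)
    Y = dft_prefix(normalize(y), m + 1)
    # Factor 2: each retained coefficient has a conjugate-symmetric mirror.
    d2 = 2 * sum(abs(a - b) ** 2 for a, b in zip(X[1:], Y[1:]))
    return 1 - d2 / 2
```

Because each stream's truncated coefficients can be maintained incrementally, detecting all highly correlated pairs reduces to a near-neighbor search in a low-dimensional coefficient space, which is what makes monitoring thousands of streams feasible.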
-
TR2000-811
2001
Genomics via Optical Mapping IV: Sequence Validation via Optical Map Matching
Antoniotti, Marco;
Anantharaman, Thomas; Paxia, Salvatore; Mishra, Bud
Abstract
|
PDF
Title: Genomics via Optical Mapping IV: Sequence Validation via Optical Map Matching
Author(s): Antoniotti, Marco; Anantharaman, Thomas; Paxia, Salvatore; Mishra, Bud
Abstract:
This paper describes the underlying mathematical model and the dynamic programming technique for the validation of a (DNA) sequence against a (DNA) map. The sequence can be obtained from a variety of sources (e.g., GenBank, Sanger's Lab, or Celera P.E.) and it is assumed to be written out as a string of nucleotides. The map is an ordered restriction map obtained through an optical mapping process and is augmented with statistical information which will be used to place (or not) the sequence in the genome.
Our approach has many other applications beyond validation: e.g. map-based sequence assembly, phasing sequence contigs, detecting and closing gaps and annotation of partially sequenced genomes to find open reading frames, genes and synteny groups.
We tested our system by checking various maps against publicly available sequence data for Plasmodium falciparum.
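The flavor of the dynamic program can be conveyed by a much-simplified sketch that aligns the fragment lengths of an in-silico restriction digest of the sequence against the observed map fragments, modeling missed cut sites by merging adjacent predicted fragments. The paper's statistical scoring is replaced here by a plain absolute-difference cost, and the penalty value is illustrative.

```python
def align_fragments(pred, obs, miss_penalty=1.0):
    """DP cost of aligning predicted digest fragments to observed map
    fragments; up to three adjacent predicted fragments may merge into one
    observed fragment, paying miss_penalty per missed cut site."""
    INF = float("inf")
    n, m = len(pred), len(obs)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            total = 0.0
            # Merge the last k predicted fragments against obs[j-1].
            for k in range(1, min(i, 3) + 1):
                total += pred[i - k]
                c = (cost[i - k][j - 1]
                     + abs(total - obs[j - 1])
                     + miss_penalty * (k - 1))
                if c < cost[i][j]:
                    cost[i][j] = c
    return cost[n][m]
```

A low final cost indicates that the sequence is consistent with the map at that location; the real system replaces the absolute-difference term with a likelihood derived from the map's sizing-error statistics.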
-
Ph.D. Thesis
2001
Knowledge Discovery in Databases for Intrusion Detection, Disease Classification and Beyond
Berger, Gideon
Abstract
|
PDF
Title: Knowledge Discovery in Databases for Intrusion Detection, Disease Classification and Beyond
Candidate: Berger, Gideon
Advisor(s): Mishra, Bud
Abstract:
As the number of networked computers grows and the amount of sensitive information available on them grows as well, there is an increasing need to ensure the security of these systems. The security of computer networks is not a new issue. We have dealt with the need for security for a long time with such measures as passwords and encryption. These will always provide an important initial line of defense. However, given a clever and malicious individual these defenses can often be circumvented. Intrusion detection is therefore needed as another way to protect computer systems. This thesis describes a novel three stage algorithm for building classification models in the presence of non-stationary, temporal, high dimensional data, in general, and for detecting network intrusions, in particular. Given a set of training data records, the algorithm begins by identifying "interesting'' temporal patterns in this data using a modal logic. This approach is distinguished from other work in this area where frequent patterns are identified. We show that when frequency is replaced by our measure of "interestingness'' the problem of finding temporal patterns is NP-complete. We then offer an efficient heuristic approach that has proven effective in experiments. Having identified interesting patterns, these patterns then become the predictor variables in the construction of a Multivariate Adaptive Regression Splines (MARS) model. This approach will be justified, after surveying other methods for solving the classification problem, by its ability to capture complex nonlinear relationships between the predictor and response variables, which is comparable to other heuristic approaches such as neural networks and classification trees, while offering improved computational properties such as rapid convergence and interpretability.
After considering a variety of approaches to the problems of over-fitting, which is inherent when modeling high dimensional data, and non-stationarity, we describe our approach to addressing these issues through the use of truncated Stein shrinkage. This approach is motivated by showing the inadmissibility of the maximum likelihood estimator (MLE) for high dimensional (dimension >= 3) data. We then discuss the application of our approach as participants in the 1999 DARPA Intrusion Detection Evaluation where we were able to exhibit the benefits of our approach. Finally, we suggest another area of research where we believe that our work would meet with similar success, namely, the area of disease classification.
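The shrinkage idea invoked above can be illustrated with the positive-part (truncated) James-Stein estimator for a Gaussian mean, which is the textbook estimator behind the inadmissibility result; this sketch is not the thesis's exact procedure.

```python
def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein estimate of a Gaussian mean.

    For a p-dimensional observation y ~ N(theta, sigma2 * I) with p >= 3,
    shrinking y toward the origin dominates the MLE (y itself) in total
    squared-error risk; truncation at zero keeps the shrinkage factor
    from going negative.
    """
    p = len(y)
    s2 = sum(v * v for v in y)
    shrink = max(0.0, 1 - (p - 2) * sigma2 / s2)
    return [shrink * v for v in y]
```

Observations with small norm are shrunk all the way to zero, while large, well-supported coefficient vectors are left nearly untouched; this is what combats over-fitting in high dimensions.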
-
TR2001-821
2001
Credentialed Secure Communication "Switchboards"
Freudenthal, Eric;
Port, Lawrence; Keenan, Edward; Pesin, Tracy; Karamcheti, Vijay
Abstract
|
PDF
Title: Credentialed Secure Communication "Switchboards"
Author(s): Freudenthal, Eric; Port, Lawrence; Keenan, Edward; Pesin, Tracy; Karamcheti, Vijay
Abstract:
Software development in distributed computation is complicated by the extra overhead of communication between connected, dispersed hosts in dynamically changing, multiple administrative domains. Many disparate technologies exist for trust management, authentication, secure communication channels, and service discovery, but the effort of composing all of these elements into a single system can outweigh the principal development effort. The NYU Disco Switchboard consolidates these connectivity issues into a single convenient, extensible architecture, providing an abstraction for managing secure, host-pair communication with connection monitoring facilities. Switchboard extends the secure authenticated communication channel abstraction provided by standard interfaces such as SSL/TLS with mechanisms to support trust management, key sharing, service discovery, and connection liveness and monitoring. We present an extensible architecture which is particularly useful in dynamically changing, distributed coalition environments. Applications that utilize Switchboard benefit from the availability of authentication, trust management, cryptography, and discovery, while retaining the simplicity of a common interface.
-
TR2001-820
2001
DisCo: A Distribution Infrastructure for Securely Deploying Decomposable Services in Partly Trusted Environments
Freudenthal, Eric;
Keenan, Edward; Pesin, Tracy; Port, Lawrence; Karamcheti, Vijay
Abstract
|
PDF
Title: DisCo: A Distribution Infrastructure for Securely Deploying Decomposable Services in Partly Trusted Environments
Author(s): Freudenthal, Eric; Keenan, Edward; Pesin, Tracy; Port, Lawrence; Karamcheti, Vijay
Abstract:
The growing popularity of network-based services and peer-to-peer networks has resulted in situations where components of a distributed application often need to execute in environments that are only partly trusted by the application's owner. Such deployment into partial or unstable trust environments exacerbates the classical problems of distributing decomposable services: authentication and access control, trust management, secure communication, code distribution and installation, and process rights management. Unfortunately, the application developer's burden of coping with these latter issues often dominates the benefits of service distribution. The DisCo infrastructure is specifically targeted to the development of systems and services deployed into coalition environments: networks of users and hosts administered by multiple authorities with changing trust relationships. The DisCo infrastructure provides application-neutral support for the classical problems of distributed services, thereby relieving the developer of the burden of independently managing these features. DisCo also includes support for continuously monitoring established connections, enabling corrective action from an application to cope with changing trust relationships. Our experience with building a secure video distribution service using the DisCo toolkit indicates that the latter permits distributed secure deployment into a partly trusted environment with minimal application developer effort, affording the advantages of natural expression and convenient deployment without compromising on efficiency.
-
TR2001-819
2001
dRBAC: Distributed Role-based Access Control for Dynamic Environments
Freudenthal, Eric;
Pesin, Tracy; Port, Lawrence; Keenan, Edward; Karamcheti, Vijay
Abstract
|
PDF
Title: dRBAC: Distributed Role-based Access Control for Dynamic Environments
Author(s): Freudenthal, Eric; Pesin, Tracy; Port, Lawrence; Keenan, Edward; Karamcheti, Vijay
Abstract:
Distributed Role-Based Access Control (dRBAC) is a scalable, decentralized trust-management and access-control mechanism for systems that span multiple administrative domains. dRBAC represents controlled actions in terms of roles , which are defined within the trust domain of one entity and can be transitively delegated to other roles within a different trust domain. dRBAC utilizes PKI to identify all entities engaged in trust-sensitive operations and to validate delegation certificates. The mapping of roles to authorized name spaces obviates the need to identify additional policy roots. dRBAC distinguishes itself from previous trust management and role-based access control approaches in its support for three features: (1) third-party delegations , which improve expressiveness by allowing an entity to delegate roles outside its namespace when authorized by an explicit delegation of assignment ; (2) valued attributes , which modulate transferred access rights via mechanisms that assign and manipulate numerical values associated with roles; and (3) credential subscriptions , which enable continuous monitoring of established trust relationships using a pub/sub infrastructure to track the status of revocable credentials. This paper describes the dRBAC model, its scalable implementation using a graph-based model of credential discovery and validation, and its application in a larger security context.
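Credential-chain discovery over the delegation graph can be sketched as a plain breadth-first search. This toy omits third-party delegations, valued attributes, and credential subscriptions, and the role and entity names are invented for illustration.

```python
from collections import deque

def proves(credentials, entity, role):
    """BFS credential-chain discovery.

    `credentials` maps a role to the set of subjects (entities or other
    roles) it has been delegated to. `entity` holds `role` if a chain of
    delegations connects role -> ... -> entity.
    """
    seen, frontier = {role}, deque([role])
    while frontier:
        r = frontier.popleft()
        for subject in credentials.get(r, ()):
            if subject == entity:
                return True
            if subject not in seen:
                seen.add(subject)
                frontier.append(subject)
    return False
```

In the full system each edge is a signed, possibly revocable certificate, so discovery interleaves graph search with certificate validation, and the subscription mechanism re-runs the check when a credential on the chain is revoked.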
-
TR2001-814
2001
Automatic Deployment of Transcoding Components for Ubiquitous, Network-Aware Access to Internet Services
Fu, Xiaodong;
Shi, Weisong; Karamcheti, Vijay
Abstract
|
PDF
Title: Automatic Deployment of Transcoding Components for Ubiquitous, Network-Aware Access to Internet Services
Author(s): Fu, Xiaodong; Shi, Weisong; Karamcheti, Vijay
Abstract:
Advances in wireless communication together with the growing number of mobile end devices hold the potential of ubiquitous access to sophisticated internet services; however, such access must cope with an inherent mismatch between the low-bandwidth, limited-resource characteristics of mobile devices and the high-bandwidth expectations of many content-rich services. One promising way of bridging this gap is by deploying application-specific components on the path between the device and service, which perform operations such as protocol conversion and content transcoding. Although several researchers have proposed infrastructures allowing such deployment, most rely on static, hand-tuned deployment strategies restricting their applicability in dynamic situations.
In this paper, we present an automatic approach for the dynamic deployment of such transcoding components, which can additionally be dynamically reconfigured as required. Our approach relies on three components: (a) a high-level integrated type-based specification of components and network resources, essential for "late binding" components to paths; (b) an automatic path creation strategy that selects and maps components so as to optimize a global metric; and (c) system support for low-overhead path reconfiguration, consisting of both restrictions on component interfaces and protocols satisfying application semantic continuity requirements. We comprehensively evaluate the effectiveness of our approach over a range of network and end-device characteristics using both a web-access scenario where client preference is for reduced access time, and a streaming scenario where client preference is for increased throughput. Our results verify that (1) automatic path creation and reconfiguration is achievable and does in fact yield substantial performance benefits; and (2) that despite their flexibility, both path creation and reconfiguration can be supported with low run-time overhead.
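The path-creation step, which "selects and maps components so as to optimize a global metric," can be viewed as a shortest-path search over (host, data-type) states, where each edge is a candidate component placement with a cost such as predicted latency. The sketch below makes that reduction concrete; the hosts, types, and costs are invented for illustration.

```python
import heapq

def best_path(edges, start, goal):
    """Dijkstra over (host, data-type) states.

    `edges` maps a state to a list of (next_state, cost) pairs, one per
    deployable transcoding component; the cheapest state sequence from the
    service's output type to the device's input type is the chosen path.
    """
    dist = {start: 0.0}
    heap = [(0.0, start, [start])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == goal:
            return d, path
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in edges.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return float("inf"), []
```

Reconfiguration then amounts to re-running the search when resource availability changes and splicing the new component chain into the live path.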
-
Ph.D. Thesis
2001
Algorithms for Rendering in Artistic Styles
Hertzmann, Aaron
Abstract
|
PDF
Title: Algorithms for Rendering in Artistic Styles
Candidate: Hertzmann, Aaron
Abstract:
We describe new algorithms and tools for generating paintings, illustrations, and animation on a computer. These algorithms are designed to produce visually appealing and expressive images that look hand-painted or hand-drawn. In many contexts, painting and illustration have many advantages over photorealistic computer graphics, in aspects such as aesthetics, expression, and computational requirements. We explore three general strategies for non-photorealistic rendering:
First, we describe explicit procedures for placing brush strokes. We begin with a painterly image processing algorithm inspired by painting with real physical media. This method produces images with a much greater subjective impression of looking hand-made than do earlier methods. By adjusting algorithm parameters, a variety of styles can be generated, such as styles inspired by the Impressionists and the Expressionists. This method is then extended to processing video, as demonstrated by painterly animations and an interactive installation. We then present a new style of line art illustration for smooth 3D surfaces. This style is designed to clearly convey surface shape, even for surfaces without predefined material properties or hatching directions.
Next, we describe a new relaxation-based algorithm, in which we search for the painting that minimizes some energy function. In contrast to the first approach, we ideally only need to specify what we want, not how to directly compute it. The system allows as fine user control as desired: the user may interactively change the painting style, specify variations of style over an image, and/or add specific strokes to the painting.
Finally, we describe a new framework for processing images by example, called ``image analogies.'' Given an example of a painting or drawing (e.g. scanned from a hand-painted source), we can process new images with some approximation to the style of the painting. In contrast to the first two approaches, this allows us to design styles without requiring an explicit technical definition of the style. The image analogies framework supports many other novel image processing operations.
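The first strategy above, explicit brush-stroke placement driven by the difference between the canvas and a reference image, can be caricatured in a few lines. Real painterly rendering uses curved strokes at multiple scales over color images, whereas this sketch places square patches on a grayscale grid; the threshold and brush size are illustrative parameters.

```python
def paint_layer(canvas, reference, brush, threshold):
    """One layer of error-driven stroke placement (a simplified sketch).

    Scan the canvas on a grid of brush-sized cells; wherever the canvas
    differs from the reference by more than `threshold`, place a square
    "stroke" of the reference color. Both images are 2D lists of
    grayscale values; the canvas is modified in place.
    """
    h, w = len(reference), len(reference[0])
    for y in range(0, h, brush):
        for x in range(0, w, brush):
            err = abs(canvas[y][x] - reference[y][x])
            if err > threshold:
                color = reference[y][x]
                for yy in range(y, min(y + brush, h)):
                    for xx in range(x, min(x + brush, w)):
                        canvas[yy][xx] = color
    return canvas
```

Running such layers coarse-to-fine, with large brushes first and small ones only where error remains, is what gives painterly algorithms their characteristic economy of strokes.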
-
TR2001-823
2001
Fast Solvers and Domain Decomposition Preconditioners for Spectral Element Discretizations of Problems in H(curl)
Hientzsch, Bernhard
Abstract
|
PDF
Title: Fast Solvers and Domain Decomposition Preconditioners for Spectral Element Discretizations of Problems in H(curl)
Author(s): Hientzsch, Bernhard
Abstract:
For problems with piecewise smooth solutions, spectral element methods hold great promise. They combine the exponential convergence of spectral methods with the geometric flexibility of finite elements. Spectral elements are well-established for scalar elliptic problems and problems of fluid dynamics, and recently the first methods for problems in H(curl) and H(div) were proposed. In this dissertation we study spectral element methods for a model problem. We first consider Maxwell's equation and derive the model problem in H(curl). Then we introduce anisotropic spectral Nédélec element discretizations with variable numerical integration for the model problem. We discuss their structure, and their convergence and approximation properties. We also obtain results on the norm of the Nédélec interpolants between Nédélec and Raviart-Thomas spaces of different degree, needed for the computation of the splitting constant for the domain decomposition preconditioner and the numerical analysis of nonlinear equations. We also prove a Friedrichs-like inequality for the model problem for the spectral case.
We present fast direct solvers for the model problem on separable domains, taking advantage of the tensor product discretization and fast diagonalization methods. We use those fast solvers as local solvers in domain decomposition methods for problems that are too large to be solved directly, or posed on non-separable domains, and use them to compute and subassemble the Schur complement system corresponding to the interface. We also apply them in the direct solution of the Schur complement system for general domains.
As an example for the domain decomposition methods that can be implemented with these tools, we introduce overlapping Schwarz methods, both one-level and two-level versions.
We extend the theory for overlapping Schwarz methods to the spectral Nédélec element case. We reduce the proof of the condition number estimate to three basic estimates, and present theoretical and numerical results on those estimates. The technique of the proof works in both the two-dimensional and three-dimensional case.
We also present numerical results for one-level and two-level methods in two dimensions.
-
Ph.D. Thesis
2001
Region-based Register Allocation for EPIC Architectures
Kim, Hansoo
Abstract
|
PDF
Title: Region-based Register Allocation for EPIC Architectures
Candidate: Kim, Hansoo
Advisor(s): Palem, Krishna
Abstract:
Instruction-level parallelism (ILP) is a family of processor and compiler design techniques that speed up execution by allowing individual machine operations to execute in parallel. Explicitly Parallel Instruction Computing (EPIC) processors evolved in an attempt to achieve high levels of ILP without the attendant hardware complexity. In EPIC processors, most of the work of extracting ILP is performed by the compiler. To take advantage of the higher levels of ILP offered by these architectures, the compiler must use aggressive ILP techniques. This opportunity for improved performance comes at the price of increased compilation time.
Limiting the size of the compilation unit reduces compilation time, but the limited scope of compilation may restrict the scope of optimization. As a result, the compiler may generate lower-quality code. Ideally, we want to obtain a smaller compilation time and the same or better execution time than that obtained using the global approach.
In this thesis, we address the trade-off between compilation time and execution performance in region-based compilation within the context of the key optimization of register allocation. We demonstrate that schemes designed for region-based allocation perform as well as or even better than schemes designed for global allocation while requiring less compilation time. To achieve this goal, we propose several innovative techniques which form the core of this thesis.
We show considerable compilation-time savings with comparable execution-time performance by synthesizing our techniques in a region-based register allocator. We also explore and quantify the relation between the performance of register allocation and the region size. Our research shows that selecting the right region size has an important impact on the performance of register allocation. We propose the concept of restructuring regions based on register pressure and discuss how register pressure can be estimated in order to improve compilation time while maintaining execution time.
-
TR2001-817
2001
Overlapping Schwarz Algorithms using Discontinuous Iterates for Poisson's Equation
Kimn, Jung-Han
Abstract
|
PDF
Title: Overlapping Schwarz Algorithms using Discontinuous Iterates for Poisson's Equation
Author(s): Kimn, Jung-Han
Abstract:
A new type of overlapping Schwarz method, the overlapping Schwarz algorithm using discontinuous iterates, is constructed from the classical overlapping Schwarz algorithm. It allows for discontinuities at each artificial interface. The new algorithm, for Poisson's equation, can be considered as an overlapping version of Lions' Robin iteration method, for which little is known concerning the convergence. Since overlap improves the performance of the classical algorithms considerably, the existence of a uniform convergence factor is the fundamental question for our new algorithm.
The first part of this thesis concerns the formulation of the new algorithm. A variational formulation of the new algorithm is derived from the classical algorithms. The discontinuity of the iterates of the new algorithm is the fundamental distinction from the classical algorithms. To analyze this important property, we use a saddle-point approach. We show that the new algorithm can be interpreted as a block Gauss-Seidel method with dual and primal variables.
The second part of the thesis deals with algebraic properties of the new algorithm. We prove that the fractional steps of the new algorithm are nonsymmetric. The algebraic systems of the primal variables can be reduced to those of the dual variables. We analyze the structure of the dual formulation algebraically and analyze its numerical behavior.
The remaining part of the thesis concerns convergence theory and numerical results for the new algorithm. We first extend the classical convergence theory, without using Lagrange multipliers, in some limited cases. A new theory using Lagrange multipliers is then introduced and we find conditions for the existence of uniform convergence factors of the dual variables, which implies convergence of the primal variables, in the two overlapping subdomain case with any Robin boundary condition. Our condition shows a relation between the given conditions and the artificial interface condition. The numerical results for the general case with cross points are also presented. They indicate possible extensions of our results to this more general case.
-
TR2001-815
2001
Dual-Primal FETI Methods for Three-dimensional Elliptic Problems with Heterogeneous Coefficients
Klawonn, Axel;
Widlund, Olof; Dryja, Maksymilian
Abstract
|
PDF
Title: Dual-Primal FETI Methods for Three-dimensional Elliptic Problems with Heterogeneous Coefficients
Author(s): Klawonn, Axel; Widlund, Olof; Dryja, Maksymilian
Abstract:
In this paper, certain iterative substructuring methods with Lagrange multipliers are considered for elliptic problems in three dimensions. The algorithms belong to the family of dual--primal FETI methods which have recently been introduced and analyzed successfully for elliptic problems in the plane. The family of algorithms for three dimensions is extended and a full analysis is provided for the new algorithms. Particular attention is paid to finding algorithms with a small primal subspace since that subspace represents the only global part of the dual--primal preconditioner. It is shown that the condition numbers of several of the dual--primal FETI methods can be bounded polylogarithmically as a function of the dimension of the individual subregion problems and that the bounds are otherwise independent of the number of subdomains, the mesh size, and jumps in the coefficients. These results closely parallel those for other successful iterative substructuring methods of primal as well as dual type.
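The polylogarithmic bound described above is typically stated in the following form (a sketch based on the abstract's claim; the symbols are the conventional ones for this literature rather than a quotation from the paper):

\[
\kappa\bigl(M^{-1} F\bigr) \;\le\; C \left(1 + \log \frac{H}{h}\right)^{2},
\]

where $F$ is the dual-primal FETI operator, $M$ is the preconditioner, $H$ is the subdomain diameter, $h$ is the mesh size, and $C$ is the constant shown to be independent of $h$, $H$, the number of subdomains, and the jumps in the coefficients.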
-
Ph.D. Thesis
2001
Adversarial Reasoning: A Logical Approach for Computer Go
Klinger, Tamir
Abstract
|
PDF
Title: Adversarial Reasoning: A Logical Approach for Computer Go
Candidate: Klinger, Tamir
Advisor(s): Davis, Ernest
Abstract:
Go is a board game with simple rules but complex strategy requiring ability in almost all aspects of human reasoning. A good Go player must be able to hypothesize moves and analyze their consequences; to judge which areas are relevant to the analysis at hand; to learn from successes and failures; to generalize that knowledge to other ``similar'' situations; and to make inferences from knowledge about a position.
Unlike computer chess, which has seen steady progress since Shannon's [23] and Turing's [24] original papers on the subject, progress on computer Go remains in its infancy. In computer chess, minimax search with alpha-beta pruning based on a simple evaluation function can beat a beginner handily. No such simple evaluation function is known for Go. To accurately evaluate a Go position requires knowledge of the life and death status of the points on the board. Since the player with the most live points at the end of the game wins, a small mistake in this analysis can be disastrous.
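The alpha-beta refinement of minimax mentioned above can be sketched generically; this is an illustrative version over an abstract game tree, not code from the thesis, and the `children`/`evaluate` callbacks stand in for a real game's move generator and evaluation function:

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax search with alpha-beta pruning over an abstract game tree."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: the minimizer avoids this branch
                break
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:       # alpha cutoff: the maximizer avoids this branch
            break
    return value

# Toy game tree: a position is either a leaf score or a list of successors.
tree = [[3, 5], [2, 9]]
best = alphabeta(tree, 4, float("-inf"), float("inf"), True,
                 lambda s: s if isinstance(s, list) else [],
                 lambda s: s)
```

As the abstract notes, what chess gains from this pruning Go cannot, for want of a comparably simple `evaluate`.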
In this dissertation we describe the design, performance, and underlying logic of a knowledge-based program that solves life and death problems in the game of Go. Our algorithm applies life and death theory coupled with knowledge about which moves are reasonable for each relevant goal in that theory to restrict the search space to a tractable size. Our results show that simple depth-first search armed with a goal theory and heuristic move knowledge yields very positive results on standard life and death test problems - even without sophisticated move ordering heuristics.
In addition to a description of the program and its internals we present a modal logic useful for describing strategic theories in games and use it to give a life and death theory and to formally state the rules of Go. We also give an axiomatization for this logic using the modal mu-calculus [15] and prove some basic theorems of the system.
-
Ph.D. Thesis
2001
Machine Level Optimizations for High Level Languages
Leung, Allen
Abstract
|
PDF
Title: Machine Level Optimizations for High Level Languages
Candidate: Leung, Allen
Advisor(s): Palem, Krishna
Abstract:
Two machine instruction level compiler optimization problems are considered in this work.
The first problem is time-constrained instruction scheduling, i.e., finding optimal schedules for machine code in the presence of time constraints such as release-times and deadlines. These types of time constraints appear naturally in embedded applications, and also as a side effect of many other compiler optimization problems. While the general problem is NP-hard, we have developed a new algorithm which can optimally handle many P-time solvable sub-instances. In fact, we show that almost all previous algorithms in this related area can be seen as an instance of the priority computation scheme that we have developed. Our work extends and unifies many algorithmic results in classical deterministic scheduling theory related to release-times, deadlines and pipeline latencies.
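One of the P-time solvable sub-instances alluded to above, unit-time jobs with release times and deadlines on a single machine, is handled optimally by the classical earliest-deadline-first rule; the sketch below is illustrative only and is not the thesis's priority-computation scheme:

```python
import heapq

def edf_unit_schedule(jobs):
    """Schedule unit-time jobs (release, deadline) on one machine by
    earliest-deadline-first.  Returns {job index: start time}, or None
    if some job would miss its deadline (the instance is infeasible)."""
    events = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    ready, schedule = [], {}
    t, k = 0, 0
    while k < len(jobs) or ready:
        if not ready and jobs[events[k]][0] > t:
            t = jobs[events[k]][0]            # idle until the next release
        while k < len(jobs) and jobs[events[k]][0] <= t:
            i = events[k]
            heapq.heappush(ready, (jobs[i][1], i))
            k += 1
        deadline, i = heapq.heappop(ready)    # most urgent ready job
        if t + 1 > deadline:
            return None                       # deadline miss: infeasible
        schedule[i] = t
        t += 1
    return schedule
```

EDF is a provably optimal priority rule for this sub-instance; the thesis's contribution lies in unifying such rules under one priority-computation scheme, which this toy does not attempt.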
The second problem that we investigate in this work is scalar optimizations in machine code. We present a new framework that utilizes static single assignment form (SSA) at the level of individual machine instructions. Complementing the framework, we have also developed new SSA construction algorithms which are faster than previous algorithms, and are very simple to implement.
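For straight-line code (no branches, hence no phi-functions), the renaming step at the heart of SSA construction fits in a few lines; this illustrative sketch is far simpler than the construction algorithms the thesis develops:

```python
from collections import defaultdict

def to_ssa(instructions):
    """Rename straight-line three-address code into SSA form.

    Each instruction is (dest, op, src1, src2); sources are variable
    names or constants.  Every definition gets a fresh subscript, and
    each use refers to the most recent subscript of that variable
    (subscript 0 marks a use of an undefined input)."""
    version = defaultdict(int)

    def use(operand):
        return f"{operand}{version[operand]}" if isinstance(operand, str) else operand

    renamed = []
    for dest, op, a, b in instructions:
        a, b = use(a), use(b)          # read the current versions first
        version[dest] += 1             # then open a new definition
        renamed.append((f"{dest}{version[dest]}", op, a, b))
    return renamed

ssa = to_ssa([("x", "+", 1, 2), ("x", "+", "x", 1), ("y", "*", "x", "x")])
```

After renaming, each variable is defined exactly once, which is what makes the machine-level scalar optimizations in the thesis straightforward to express.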
-
Ph.D. Thesis
2001
Exact Geometric Computation: Theory and Applications
Li, Chen
Abstract
|
PDF
Title: Exact Geometric Computation: Theory and Applications
Candidate: Li, Chen
Abstract:
This dissertation explores the theory and applications of Exact Geometric Computation (EGC), a general approach to robust geometric computing. The contributions of this thesis are organized into three parts.
A fundamental task in EGC is to support exact comparison of algebraic expressions. This leads to the problem of constructive root bounds for algebraic expressions. Such root bounds determine the worst-case complexity of exact comparisons. In the first part, we present a new constructive root bound which, compared to previous bounds, can give dramatically better performance in many common computations involving divisions and radical roots. We also improve the well-known degree-measure bound by exploiting the sharing of common sub-expressions.
In the second part, we discuss the design and implementation of the Core Library, a C++ library which embraces the EGC approach to robust numerical and geometric computation. Our design emphasizes ease of use and facilitates the rapid development of robust geometric applications. It allows non-specialist programmers to add robustness into new or existing applications with little extra effort. A number of efficiency and implementation issues are investigated. Although focused on geometric computation, the EGC techniques and software we developed can be applied to other areas where it is critical to guarantee numerical precision.
In the third part, we introduce a new randomized test for the vanishing of multivariate radical expressions. With this test, we develop a probabilistic approach to proving elementary geometry theorems about ruler-and-compass constructions. A probabilistic theorem prover based on this approach has been implemented using the Core Library. We present some empirical data.
-
TR2001-816
2001
A Dual-Primal FETI Method for Incompressible Stokes Equations
Li, Jing
Abstract
|
PDF
Title: A Dual-Primal FETI Method for Incompressible Stokes Equations
Author(s): Li, Jing
Abstract:
In this paper, a dual-primal FETI method is developed for incompressible Stokes equations approximated by mixed finite elements with discontinuous pressures. The domain of the problem is decomposed into nonoverlapping subdomains, and the continuity of the velocity across the subdomain interface is enforced by introducing Lagrange multipliers. By a Schur complement procedure, solving the indefinite Stokes problem is reduced to solving a symmetric positive definite problem for the dual variables, i.e., the Lagrange multipliers. This dual problem is solved by a Krylov space method with a Dirichlet preconditioner. At each step of the iteration, both subdomain problems and a coarse problem on the coarse subdomain mesh are solved by a direct method. It is proved that the condition number of this preconditioned problem is independent of the number of subdomains and bounded from above by the product of the inverse of the inf-sup constant of the discrete problem and the square of the logarithm of the number of unknowns in the individual subdomain problems. Illustrative results are presented by solving a lid-driven cavity problem.
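The Schur complement reduction described above can be illustrated on a dense toy saddle-point system; the actual method solves the reduced system iteratively with a Dirichlet preconditioner rather than forming the complement explicitly, so this sketch conveys only the algebra:

```python
def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    T = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(T[r][c]))
        T[c], T[p] = T[p], T[c]
        for r in range(c + 1, n):
            f = T[r][c] / T[c][c]
            for k in range(c, n + 1):
                T[r][k] -= f * T[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (T[r][n] - sum(T[r][k] * x[k] for k in range(r + 1, n))) / T[r][r]
    return x

def saddle_point_solve(A, B, f, g):
    """Solve [[A, B^T], [B, 0]] [u; lam] = [f; g] by eliminating u:
    form S = B A^{-1} B^T and solve S lam = B A^{-1} f - g,
    then recover u = A^{-1} (f - B^T lam)."""
    n, m = len(A), len(B)
    Ainv_f = solve(A, f)
    Ainv_Bt = [solve(A, list(B[j])) for j in range(m)]   # columns of A^{-1} B^T
    S = [[sum(B[i][k] * Ainv_Bt[j][k] for k in range(n)) for j in range(m)]
         for i in range(m)]
    rhs = [sum(B[i][k] * Ainv_f[k] for k in range(n)) - g[i] for i in range(m)]
    lam = solve(S, rhs)
    u = solve(A, [f[k] - sum(B[j][k] * lam[j] for j in range(m)) for k in range(n)])
    return u, lam

# Tiny example: A symmetric positive definite, one constraint row B.
u, lam = saddle_point_solve([[4.0, 1.0], [1.0, 3.0]], [[1.0, 2.0]],
                            [1.0, 2.0], [3.0])
```

In the paper's setting the dual system is symmetric positive definite, which is what licenses a conjugate-gradient-style Krylov iteration instead of the dense elimination shown here.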
-
Ph.D. Thesis
2001
An On-Line Handwriting Recognizer with Fisher Matching, Hypotheses Propagation Network and Context Constraint Models
Oh, Jong
Abstract
|
PDF
Title: An On-Line Handwriting Recognizer with Fisher Matching, Hypotheses Propagation Network and Context Constraint Models
Candidate: Oh, Jong
Advisor(s): Geiger, Davi
Abstract:
We have developed an on-line handwriting recognition system. Our approach integrates local bottom-up constructs with a global top-down measure into a modular recognition engine. The bottom-up process uses local point features for hypothesizing character segmentations and the top-down part performs shape matching for evaluating the segmentations. The shape comparison, called Fisher segmental matching, is based on Fisher's linear discriminant analysis. The component character recognizer of the system uses two kinds of Fisher matching based on different representations and combines the information to form the multiple experts paradigm.
Along with efficient ligature modeling, the segmentations and their character recognition scores are integrated into a recognition engine termed the Hypotheses Propagation Network (HPN), which runs a variant of the topological sort algorithm from graph search. The HPN improves on the conventional Hidden Markov Model and Viterbi search by using more robust mean-based scores for word-level hypotheses and by keeping multiple predecessors during the search.
We have also studied and implemented a geometric context model termed Visual Bigram Modeling that improves the system's accuracy by taking into account the geometric constraints under which the component characters in a word are formed in relation to their neighboring characters. The result is a shape-oriented system that is robust with respect to local and temporal features, modular in construction, and rich in opportunities for further extension.
-
Ph.D. Thesis
2001
Continuous Model for Salient Shape Selection and Representation
Pao, Hsing-Kuo (Kenneth)
Abstract
|
PDF
Title: Continuous Model for Salient Shape Selection and Representation
Candidate: Pao, Hsing-Kuo (Kenneth)
Advisor(s): Geiger, Davi
Abstract:
We propose a new framework for shape representation and salient shape selection. The framework is considered a low- to middle-level vision process and can be applied to various topics, including figure/ground separation, search for the shape axis, junction detection, and illusory figure finding. The model construction is inspired by the Gestalt studies, which suggest that proximity, convexity, similarity, good continuation, closure, symmetry, etc., are useful for figure/ground separation and visual organization. First, we quantify those attributes for (complete or partial) shapes with our distributed systems; a shape is then evaluated and represented by the results. In particular, we emphasize shape convexity rather than attributes such as the symmetry axis or size, which have been well studied before. Our problem is posed in a continuous manner. For shape convexity, unlike the conventional mathematical definition, we aim to derive a definition that can describe one shape as ``more convex'' or ``less convex'' than another. In searching for the shape axis, we obtain continuous information rather than a binary decision of whether a point lies on an axis: we distinguish axes with ``stronger'' or ``weaker'' declarations, and an easy and natural pruning scheme follows from this representation. For junction detection, we assume no artificial threshold; instead, our representation exhibits the transition from low-curvature to high-curvature curves or curves with discontinuities. The model is based on a variational approach, minimizing the data-fitting error as well as the neighborhood discrepancy. Two models are proposed: a decay diffusion process and an orientation diffusion process.
-
TR2001-813
2001
Balancing Neumann-Neumann Methods for Incompressible Stokes Equations
Pavarino, Luca;
Widlund, Olof
Abstract
|
PDF
Title: Balancing Neumann-Neumann Methods for Incompressible Stokes Equations
Author(s): Pavarino, Luca; Widlund, Olof
Abstract:
Balancing Neumann-Neumann methods are introduced and studied for incompressible Stokes equations discretized with mixed finite or spectral elements with discontinuous pressures. After decomposing the original domain of the problem into nonoverlapping subdomains, the interior unknowns of each subdomain problem, which are the interior velocity components and all except the constant pressure component, are implicitly eliminated. The resulting saddle point Schur complement is solved with a Krylov space method with a balancing Neumann-Neumann preconditioner based on the solution of a coarse Stokes problem with a few degrees of freedom per subdomain and on the solution of local Stokes problems with natural velocity and essential boundary conditions on the subdomains. This preconditioner is of hybrid form in which the coarse problem is treated multiplicatively while the local problems are treated additively. The condition number of the preconditioned operator is independent of the number of subdomains and is bounded from above by the product of the square of the logarithm of the local number of unknowns in each subdomain and the inverse of the inf-sup constants of the discrete problem and of the coarse subproblem. Numerical results show that the method is quite fast; they are also fully consistent with the theory.
-
TR2001-822
2001
Modeling Object Characteristics of Dynamic Web Content
Shi, Weisong;
Collins, Eli; Karamcheti, Vijay
Abstract
|
PDF
Title: Modeling Object Characteristics of Dynamic Web Content
Author(s): Shi, Weisong; Collins, Eli; Karamcheti, Vijay
Abstract:
Requests for dynamic and personalized content increasingly dominate current-day Internet traffic, driven both by a growth in dynamic web services and a ``trickle-down'' effect stemming from the effectiveness of caches and content-distribution networks at serving static content. In response to this trend, several server-side and cache-side techniques have recently been proposed. Although such techniques, which exploit different forms of reuse at the sub-document level, appear promising, a significant impediment to their widespread deployment is (1) the absence of good models describing the characteristics of dynamic web content, and (2) the lack of effective synthetic content generators, which would reduce the effort involved in verifying the effectiveness of a proposed solution.
This paper addresses both of these shortcomings. Its primary contribution is a set of models that capture the characteristics of dynamic content both in terms of independent parameters such as the distributions of object sizes and their freshness times, as well as derived parameters such as content reusability across time and linked documents. These models are derived from an analysis of the content from six representative news and e-commerce sites, using both size-based and level-based splitting techniques to infer document objects. A secondary contribution is a Tomcat-based dynamic content emulator, which uses these models to generate ESI-based dynamic content and serve requests for whole document and separate objects. To validate both the models and the design of the content emulator, we compare the bandwidth requirements seen by an idealized cache simulator that is driven by both the real trace and emulated content. Our simulation results verify that the output of the content emulator effectively and efficiently models real content.
-
Ph.D. Thesis
2001
Language Support for Program Generation Reasoning, Implementation, and Applications
Yang, Zhe
Abstract
|
PDF
Title: Language Support for Program Generation Reasoning, Implementation, and Applications
Candidate: Yang, Zhe
Advisor(s): Danvy, Olivier; Goldberg, Benjamin
Abstract:
This dissertation develops programming languages and associated techniques for sound and efficient implementations of algorithms for program generation.
First, we develop a framework for practical two-level languages. In this framework, we demonstrate that two-level languages are not only a good tool for describing program-generation algorithms, but a good tool for reasoning about them and implementing them as well. We pinpoint several general properties of two-level languages that capture common proof obligations of program-generation algorithms:
- To prove that the generated program behaves as desired, we use an erasure property to reduce the two-level proof obligation to a simpler one-level proof obligation.
- To prove that the generated program satisfies certain syntactic constraints, we use a type-preservation property for a refined type system that enforces these constraints.
In addition, to justify concrete implementations, we use a native embedding of a two-level language into a one-level language.
We present two-level languages with these properties both for a call-by-name object language and for a call-by-value object language with computational effects, and demonstrate them through two classes of non-trivial applications: one-pass transformations into continuation-passing style and type-directed partial evaluation for call-by-name and for call-by-value.
Next, to facilitate implementations, we develop several general approaches to programming with type-indexed families of values within the popular Hindley-Milner type system. Type-indexed families provide a form of type dependency, which is employed by many algorithms that generate typed programs, but is absent from mainstream languages. Our approaches are based on type encodings, so that they are type safe. We demonstrate and compare them through a host of examples, including type-directed partial evaluation and printf-style formatting.
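The printf-style formatting example can be given a small flavor in the continuation-passing style of Danvy's functional unparsing; unlike the Hindley-Milner encodings in the dissertation, Python will not statically check that the arguments match the directives, so this sketch conveys only the shape of the technique:

```python
def lit(s):
    """A literal directive: emits s and consumes no argument."""
    return lambda k, acc: k(acc + s)

def num(k, acc):
    """A directive that consumes one integer argument."""
    return lambda n: k(acc + str(n))

def fmt(*directives):
    """Compose directives into a curried formatter: each argument-taking
    directive adds one expected argument to the resulting function."""
    def run(acc, ds):
        if not ds:
            return acc
        return ds[0](lambda a: run(a, ds[1:]), acc)
    return run("", directives)

greeting = fmt(lit("x = "), num)   # a formatter expecting one int
pair = fmt(num, lit(" < "), num)   # a formatter expecting two ints, curried
```

The point of the type-encoding techniques in the thesis is precisely that an ML-style type system can infer, from the directive list alone, how many arguments (and of which types) the resulting formatter takes.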
Finally, upon the two-level framework and type-encoding techniques, we recast a joint work with Bernd Grobauer, where we formally derived a suitable self application for type-directed partial evaluation, and achieved automatic compiler generation.
-
TR2001-818
2001
Enforcing Resource Sharing Agreements among Distributed Server Clusters
Zhao, Tao;
Karamcheti, Vijay
Abstract
|
PDF
Title: Enforcing Resource Sharing Agreements among Distributed Server Clusters
Author(s): Zhao, Tao; Karamcheti, Vijay
Abstract:
Future scalable, high throughput, and high performance applications are likely to execute on platforms constructed by clustering multiple autonomous distributed servers, with resource access governed by agreements between the owners and users of these servers. As an example, application service providers (ASPs) can pool their resources together according to pre-specified sharing agreements to provide better services to their customers. Such systems raise several new resource management challenges, chief amongst which is the enforcement of agreements to ensure that, despite the distributed nature of both requests and resources, user requests only receive a predetermined share of the aggregate resource and that the resources of a participant are not misused. Current solutions only enforce such agreements at a coarse granularity and in a centralized fashion, limiting their applicability for general workloads.
This paper presents an architecture for the distributed enforcement of resource sharing agreements. Our approach exploits a uniform application-independent representation of agreements, and combines it with efficient time-window based coordinated queuing algorithms running on multiple nodes. We have successfully implemented this general strategy in two different network layers: a layer-7 HTTP redirector and a layer-4 packet redirector, which redirect connection requests from distributed clients to a cluster of distributed servers. Our measurements of both implementations verify that our approach is general and effective: different client groups receive service commensurate with their agreements.
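A drastically simplified, hypothetical sketch of window-based enforcement: within each time window, a group's admissions are capped at its agreed fraction of the aggregate capacity. The class and its names are illustrative, not from the paper, and the paper's algorithms additionally coordinate such state across multiple nodes:

```python
class WindowShareEnforcer:
    """Cap each client group's admissions per time window at its agreed
    share of aggregate capacity (single-node, illustrative sketch)."""

    def __init__(self, shares, capacity):
        self.shares = shares              # e.g. {"gold": 0.7, "bronze": 0.3}
        self.capacity = capacity          # admissions allowed per window
        self.admitted = {g: 0 for g in shares}

    def new_window(self):
        """Reset per-group counters at a window boundary."""
        self.admitted = {g: 0 for g in self.shares}

    def try_admit(self, group):
        """Admit a request only if the group is still under its quota."""
        if self.admitted[group] < self.shares[group] * self.capacity:
            self.admitted[group] += 1
            return True
        return False

enforcer = WindowShareEnforcer({"gold": 0.7, "bronze": 0.3}, capacity=10)
gold_served = sum(enforcer.try_admit("gold") for _ in range(20))
```

Even in this toy, each group's service within a window is commensurate with its agreed share, which is the invariant the paper's distributed queuing algorithms maintain across nodes.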
-
Ph.D. Thesis
2000
SETL for Internet Data Processing
Bacon, David
Abstract
|
PDF
Title: SETL for Internet Data Processing
Candidate: Bacon, David
Advisor(s): Schwartz, Jack
Abstract:
Although networks and coordinated processes figure prominently in the kinds of data manipulation found in everything from scientific modeling to large-scale data mining, programmers charged with setting up the requisite software systems frequently find themselves hampered by the inadequacy of available languages. The ``real'' languages such as C++ and Java tend to be low-level, requiring the specification of a great deal of often repetitive detail, whereas the higher-level ``scripting'' languages tend to lack the kinds of structuring facilities that lend themselves to the reliable construction of even modestly large systems.
The high-level language SETL meets both of these needs. Originally conceived as a language which aimed to bring programming a little closer to the idealized world of mathematics, making it extremely useful in the human-to-human communication of algorithms, SETL has proven itself over the years to be an excellent language for software prototyping, primarily because its conciseness and immediacy lend it well to rapid experimentation. These characteristics, together with its general freedom from machine-oriented restrictions, its value semantics, its comprehension-style constructors for aggregates, its skill with strings, and especially its syntactic support for mappings, also make it well suited to high-level data processing.
In order to play the role of a full-fledged modern data processing language, however, SETL had to acquire the ability to manipulate processes and communicate with them easily, and furthermore to be able to work with networks, particularly the client-server model that rules the Internet. Accordingly, I have integrated a full set of process and network management features into SETL. In my dissertation, I show how the liberal use of full-weight processes, with the high, protective walls that surround them, sustains a modular design approach which in turn provides a strong defense against the main hazards of distributed computing, namely race conditions and deadlock, while preserving the luxury and convenience of programming in a truly high-level language. To this end, I have evolved protocols and design patterns for developing multiplexing servers and clients in SETL, and I present examples of fairly complex systems in which hierarchies of processes communicate over the network. Such systems tend to be notorious for their unreliability, but in these instances, robustness seems to follow naturally from the readability of simple programs written in an ancient and friendly language.
-
TR2000-802
2000
Continuous Shape Transformation and Metrics on Shapes
Davis, Ernest
Abstract
|
PDF
Title: Continuous Shape Transformation and Metrics on Shapes
Author(s): Davis, Ernest
Abstract:
A natural approach to defining continuous change of shape is in terms of a metric that measures the difference between two regions. We consider four such metrics over regions: the Hausdorff distance, the dual-Hausdorff distance, the area of the symmetric difference, and the optimal-homeomorphism metric. Each of these gives a different criterion for continuous change. We establish qualitative properties of all of these; in particular, the continuity of basic functions such as union, intersection, set difference, area, distance, and the boundary function, and the transition graph between RCC relations (Randell, Cui, and Cohn, 1992). We discuss the physical significance of these different criteria.
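For finite point sets, a discrete stand-in for the regions considered in the paper, the Hausdorff distance is short to compute:

```python
import math

def hausdorff(P, Q):
    """Hausdorff distance between two finite point sets in the plane:
    the largest distance from a point of one set to the nearest point
    of the other, taken in both directions."""
    def directed(A, B):
        return max(min(math.dist(a, b) for b in B) for a in A)
    return max(directed(P, Q), directed(Q, P))
```

Two sets are close in this metric exactly when each is contained in a small neighborhood of the other, which is what makes it a natural criterion for continuous change of shape.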
We also show that the history-based definition of continuity proposed by Muller (1998) is equivalent to continuity with respect to the Hausdorff distance. An examination of the difference between the transition rules that we have found for the Hausdorff distance and the transition theorems that Muller derives leads to the conclusion that Muller's analysis of state transitions is not adequate. We propose an alternative characterization of transitions in Muller's first-order language over histories.
-
TR2000-809
2000
Describing Spatial Transitions Using Mereotopological Relations Over Histories
Davis, Ernest
Abstract
|
PDF
Title: Describing Spatial Transitions Using Mereotopological Relations Over Histories
Author(s): Davis, Ernest
Abstract:
Muller (1998) develops a language of motion and shape change in terms of topological relations and temporal order relations between regions of space-time (histories). He uses this language to state and prove the transition rules developed in (Randell, Cui, and Cohn, 1992) that constrain the changes in spatial relations possible for objects whose shape changes continuously. Unfortunately, Muller's statement of the transition rules is inadequate. This paper presents an alternative statement of these transition rules.
-
Ph.D. Thesis
2000
A Rigorous Framework for Fully Supporting the IEEE Standard for Floating-Point Arithmetic in High-Level Programming Languages
Figueroa, Sam
Abstract
|
PDF
Title: A Rigorous Framework for Fully Supporting the IEEE Standard for Floating-Point Arithmetic in High-Level Programming Languages
Candidate: Figueroa, Sam
Advisor(s): Dewar, Robert
Abstract:
Processors conforming to the IEEE Standard for Floating-Point Arithmetic have been commonplace for some years, and now several programming languages seem to support or conform to this standard, from here on referred to as ``the IEEE Standard.'' For example, The Java Language Specification by Gosling, Joy, and Steele, which defines the Java language, frequently mentions the IEEE Standard. Indeed, Java, as do other languages, supports some of the features of the IEEE Standard, including a couple of floating-point data formats, and even requires (in section 4.2.4 ``Floating-Point Operations'' of the aforementioned book) that ``operators on floating-point numbers behave exactly as specified by IEEE 754.''
Arguing that the support current languages offer is not enough, this thesis establishes clear criteria for what it means to fully support the IEEE Standard in a programming language. Each aspect of the IEEE Standard is examined in detail from the point of view of how various arithmetic engines implement that aspect of the IEEE Standard, how different languages (and implementations thereof) support it, and what the range of options is in supporting that aspect. Practical recommendations are then offered (particularly, but not exclusively, for Ada and Java), taking, for example, programmer convenience and impact on performance into consideration. A detailed model specification following these recommendations is provided for the Ada language.
In addition, a variety of issues related to the floating-point aspects of programming languages are discussed, so as to serve as a more complete guide to language designers. One such issue is floating-point expression evaluation schemes, and, more specifically, whether bit-for-bit identical results are actually achievable on a variety of platforms that conform to the IEEE Standard, as the Java language promises. Closely tied to this issue is that of double rounding, which occurs when a (possibly intermediate) result is rounded more than once before subsequent use or before being delivered to its final destination. So this thesis discusses when double rounding makes a difference, how it can be avoided, and what the performance impact is in avoiding it.
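Double rounding is easiest to exhibit in decimal; the sketch below rounds once at a coarse precision and once through an intermediate precision, using round-half-up (the binary IEEE case with round-to-nearest-even is analogous but requires more carefully chosen values):

```python
from decimal import Decimal, ROUND_HALF_UP

x = Decimal("1.2449")
# Rounding directly to two places: the dropped ".0049" is below half a unit.
direct = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
# Rounding to three places first pushes the value up to a halfway case...
step = x.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)
# ...so the second rounding then rounds up again, overshooting.
twice = step.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

The same phenomenon arises in binary when an extended-precision intermediate (e.g. 80-bit x87) is rounded before being stored to a narrower destination, which is exactly the situation the thesis analyzes.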
-
TR2000-808
2000
CANS: Composable, Adaptive Network Services Infrastructure
Fu, Xiaodong;
Shi, Weisong; Akkerman, Anatoly; Karamcheti, Vijay
Abstract
|
PDF
Title: CANS: Composable, Adaptive Network Services Infrastructure
Author(s): Fu, Xiaodong; Shi, Weisong; Akkerman, Anatoly; Karamcheti, Vijay
Abstract:
The growth of the Internet has been fueled by an increasing number of sophisticated network-accessible services. Unfortunately, the high bandwidth and processing requirements of such services are at odds with current trends towards increased variation in network characteristics and a large diversity in end devices. Ubiquitous access to such services requires the injection of additional functionality into the network to handle protocol conversion, data transcoding, and in general to bridge disparate portions of the physical network. Several researchers have proposed infrastructures for injecting such functionality; however, many challenges remain before these infrastructures can be widely deployed.
CANS is an application-level infrastructure for injecting application-specific components into the network that focuses on three such challenges: (a) efficient and dynamic composition of individual components; (b) dynamic and distributed adaptation of injected components in response to system conditions; and (c) support for legacy applications and services. The network view supported by CANS consists of applications, stateful services, and data paths between them built up from mobile soft-state objects called drivers. Both services and data paths can be dynamically created and reconfigured: a planning and event propagation model assists in distributed adaptation, and a run-time type-based composition model dictates how new services and drivers are integrated with existing components. An interception layer that virtualizes network bindings permits legacy applications to plug into the CANS infrastructure, and a delegation model does the same for legacy services.
This paper describes the CANS architecture and implementation, and a case study involving a shrink-wrapped client application in a dynamically changing network environment where CANS was used to improve overall user experience.
-
Ph.D. Thesis
2000
A Language-Theoretic Approach to Algorithms
Goyal, Deepak
Abstract
|
PDF
Title: A Language-Theoretic Approach to Algorithms
Candidate: Goyal, Deepak
Advisor(s): Paige, Bob
Abstract:
An effective algorithm design language should be 1) wide-spectrum in nature, i.e., capable of expressing both abstract specifications and low-level implementations, and 2) ``computationally transparent'', i.e., facilitating accurate estimation of time and space requirements. The conflict between these requirements is exemplified by SETL, which is wide-spectrum but lacks computational transparency because of its reliance on hash-based data structures. The first part of this thesis develops an effective algorithm design language, and the second demonstrates its usefulness for algorithm explanation and discovery.
In the first part three successively more abstract set-theoretic languages are developed and shown to be computationally transparent. These languages can collectively express both abstract specifications and low-level implementations. We formally define a data structure selection method for these languages using a novel type system. Computational transparency is obtained for the lowest-level language through the type system, and for the higher-level languages by transformation into the next lower level. We show the effectiveness of this method by using it to improve a difficult database query optimization algorithm from expected to worst-case linear time. In addition, a simpler explanation and a shorter proof of correctness are obtained.
In the second part we show how our data structure selection method can be made an effective third component of a transformational program design methodology whose first two components are finite differencing and dominated convergence. Finite differencing replaces costly repeated computations by cheaper incremental counterparts, and dominated convergence provides a generalized iteration scheme for computing fixed-points. This methodology has led us to a simpler explanation of a complex linear-time model-checking algorithm for the alternation-free modal mu-calculus, and to the discovery of an O(N^3) time algorithm for computing intra-procedural may-alias information that improves over an existing O(N^5) time algorithm.
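The flavor of combining an iteration scheme for fixed points with incremental propagation can be seen in a miniature reachability computation; this generic workset loop only illustrates the idea and is not the derived algorithms themselves:

```python
def reachable(edges, start):
    """Least fixed point of R = {start} | succ(R), computed with a
    workset so that only newly added elements are propagated, instead
    of recomputing succ(R) from scratch on every iteration."""
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
    result = {start}
    work = [start]
    while work:
        u = work.pop()
        for v in succ.get(u, []):
            if v not in result:      # propagate only the increment
                result.add(v)
                work.append(v)
    return result
```

Replacing the naive "re-apply the whole operator until stable" loop with this incremental form is the kind of speedup finite differencing makes systematic.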
-
TR2000-801
2000
Paint By Relaxation
Hertzmann, Aaron
Abstract
|
PDF
Title: Paint By Relaxation
Author(s): Hertzmann, Aaron
Abstract:
We use relaxation to produce painted imagery from images and video. An energy function is first specified; a painting is then generated by performing a search for a painting with minimal energy. The appeal of this strategy is that, ideally, we need only specify what we want, not how to directly compute it. Because the energy function is very difficult to optimize, we use a relaxation algorithm combined with various search heuristics.
This formulation allows us to specify painting style by varying the relative weights of energy terms. The basic energy function yields an economical painting that effectively conveys an image with few strokes. This economical style produces moderate temporal coherence when processing video, without losing the essential 2D quality of the painting. The system allows user control as fine as desired: the user may interactively change the painting style, specify variations of style over an image, and/or add specific strokes to the painting. Procedural stroke textures may be used to enhance visual appeal.
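Stripped of the painting-specific energy terms and stroke moves, the relaxation loop itself is generic accept-if-energy-decreases local search; the names below are illustrative, not from the paper:

```python
import random

def relax(energy, state, propose, iters=1000, seed=0):
    """Greedy relaxation: repeatedly propose a local move and keep it
    only if it lowers the energy; return the best state found."""
    rng = random.Random(seed)
    best = energy(state)
    for _ in range(iters):
        candidate = propose(state, rng)
        e = energy(candidate)
        if e < best:
            state, best = candidate, e
    return state, best

# Toy energy: squared distance of an integer from 3; moves perturb by +/-1.
final, e = relax(lambda s: (s - 3) ** 2, 10,
                 lambda s, rng: s + rng.choice([-1, 1]))
```

In the paper the state is a whole painting, the moves add, delete, or perturb strokes, and the search heuristics make this loop tractable for the very difficult painting energy.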
-
Ph.D. Thesis
2000
Supporting a Flexible Parallel Programming Model on a Network of Non-Dedicated Workstations
Huang, Shih-Chen
Abstract
|
PDF
Title: Supporting a Flexible Parallel Programming Model on a Network of Non-Dedicated Workstations
Candidate: Huang, Shih-Chen
Advisor(s): Kedem, Zvi
Abstract:
A network of non-dedicated workstations can provide computational resources at minimal or no additional cost. If harnessed properly, the combined computational power of these otherwise ``wasted'' resources can outperform even mainframe computers. Performing demanding computations on a network of non-dedicated workstations efficiently has previously been studied, but inadequate handling of the unpredictable behavior of the environment and of possible failures resulted in only limited success.
This dissertation presents a shared memory software system for executing programs with nested parallelism and synchronization on a network of non-dedicated workstations. The programming model exhibits a very convenient and natural programming style and is especially suitable for computations whose complexity and parallelism emerge only during their execution, such as in divide and conquer problems. To both support and take advantage of the flexibility inherent in the programming model, an architecture that distributes both the shared memory management and the computation is developed. This architecture removes bottlenecks inherent in centralization, thus enhancing scalability and dependability. By adapting available resources dynamically and coping with unpredictable machine slowdowns and failures, the system also supports dynamic load balancing and fault tolerance, both transparently to the programmer.
-
Ph.D. Thesis
2000
Global Optimization Using Embedded Graphs
Ishikawa, Hiroshi
Abstract
|
PDF
Title: Global Optimization Using Embedded Graphs
Candidate: Ishikawa, Hiroshi
Advisor(s): Geiger, Davi
Abstract:
One of the challenges of computer vision is that the information we seek to extract from images is not even defined for most images. Because of this, we cannot hope to find a simple process that produces the information directly from a given image. Instead, we need a search, or an optimization, in the space of parameters that we are trying to estimate.
In this thesis, I introduce two new optimization methods that use graph algorithms. They are characterized by their ability to find a global optimum efficiently. Each method defines a graph that can be seen as embedded in a Euclidean space. Graph-theoretic entities such as cuts and cycles represent geometric objects that embody the information we seek.
The first method finds a hypersurface in a Euclidean space that minimizes a certain kind of energy functional. The hypersurface is approximated by a cut of an embedded graph so that the total cost of the cut corresponds to the energy. A globally optimal solution is found by using a minimum cut algorithm. In particular, it can globally solve first order Markov Random Field problems in more generality than was previously possible. I prove that the convexity of the smoothing function in the energy is essential for the applicability of the method and provide an exact criterion in terms of the MRF energy.
The second method proposed here efficiently finds an optimal cycle in a Euclidean space. It uses a minimum ratio cycle algorithm to find a cycle with minimum energy in an embedded graph. In the case of two dimensions, the energy can depend not only on the cycle itself but also on the region defined by the cycle. Because of this, the method unifies the two competing views of boundary and region segmentation.
I demonstrate the utility of the methods in applications, with the results of experiments in the areas of binocular stereo, image restoration, and image segmentation. The image segmentation, or contour extraction, experiments are carried out in various situations using different types of information, for example motion, stereo, and intensity.
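The reduction of a binary first-order MRF energy to a minimum s-t cut, which the first method generalizes, can be sketched on a toy 1D signal (an illustrative stdlib-only example using the standard graph-cut encoding, not the thesis's exact embedded-graph construction):

```python
from collections import defaultdict, deque

# Illustrative sketch: E(x) = sum_i |obs_i - x_i| + lam * sum_i |x_i - x_{i+1}|
# over binary labels x_i is minimized exactly by a minimum s-t cut.

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap is a dict-of-dicts of residual capacities."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:                     # augment along the path
            cap[u][v] -= push
            cap[v][u] += push
        flow += push

def mrf_binary_labels(obs, lam=1):
    s, t = 's', 't'
    cap = defaultdict(lambda: defaultdict(int))
    for i, o in enumerate(obs):
        cap[s][i] += abs(o - 1)   # paid if pixel i takes label 1
        cap[i][t] += abs(o)       # paid if pixel i takes label 0
    for i in range(len(obs) - 1):
        cap[i][i + 1] += lam      # smoothness term, both directions
        cap[i + 1][i] += lam
    max_flow(cap, s, t)
    reach, q = {s}, deque([s])    # source side of the min cut -> label 0
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return [0 if i in reach else 1 for i in range(len(obs))]
```

With a single isolated flip in the data, the smoothness term wins: `mrf_binary_labels([1, 1, 0, 1, 1])` restores the all-ones labeling, since cutting one unary edge (cost 1) is cheaper than cutting two smoothness edges (cost 2).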
-
Ph.D. Thesis
2000
On the Use of Functionals on Boundaries in Hierarchical Models of Object Recognition
Jermyn, Ian
Abstract
|
PDF
Title: On the Use of Functionals on Boundaries in Hierarchical Models of Object Recognition
Candidate: Jermyn, Ian
Advisor(s): Geiger, Davi
Abstract:
Object recognition is a central problem in computer vision. Typically it is assumed to follow a sequential model in which successively more specific hypotheses are generated about the image. This is a rather simplistic model, allowing as it does no margin for error at any point. We follow a more general approach in which the various representations involved are allowed to influence one another from the outset. As a guide and ultimate goal, we study the problem of finding the region occupied by human beings in images, and the separation of the region into arms, legs and head. We approach the problem as that of defining a functional on the space of boundaries in images whose minimum specifies the region occupied by the human figure. Previous work that uses such functionals suffers from a number of difficulties. These include an uncontrollable dependence on scale, an inability to find the global minimum for boundaries in polynomial time, and the inability to include region as well as boundary information. We present a new form of functional on boundaries in a manifold that solves these problems, and is also the unique form of functional in a specific class that possesses a non-trivial, efficiently computable global minimum. We describe applications of the model to single images and to the extraction of boundaries from stereo pairs and motion sequences. In addition, the functionals used in previous work could not include information about the shape of the region sought. We develop a model for the part structures of boundaries that extends previous work to the case of real images, thus including shape information in the functional framework. We show that such part structures are hyperpaths in a hypergraph. An `optimal hyperpath' algorithm is developed that globally minimizes the functional under some conditions. We show how to use exemplars of a shape to construct a functional that includes specific information about the topology of the part structure sought. 
An algorithm is developed that globally minimizes such functionals in the case of a fixed boundary. The behaviour of the functional mimics an aspect of human shape comparison.
-
TR2000-803
2000
Verifying a Design Pattern for the Fault-Tolerant Execution of Parallel Programs
Kindler, Ekkart;
Shasha, Dennis
Abstract
|
PDF
Title: Verifying a Design Pattern for the Fault-Tolerant Execution of Parallel Programs
Author(s): Kindler, Ekkart; Shasha, Dennis
Abstract:
We present a protocol for the fault-tolerant execution of parallel programs. The protocol leaves the implementation free to make choices concerning efficiency tradeoffs. Thus, we are proposing a design pattern rather than a fully specified algorithm. The protocol is modeled with the help of Petri nets.
Based on the Petri net model, we formally prove the correctness of the design pattern. This verification serves two goals: first, it guarantees the correctness of the design pattern; second, it serves as a test case for the underlying verification technique.
-
TR2000-810
2000
An Overlapping Domain Decomposition Preconditioner for a Class of Discontinuous Galerkin Approximations of Advection-Diffusion Problems
Lasser, Caroline;
Toselli, Andrea
Abstract
|
PDF
Title: An Overlapping Domain Decomposition Preconditioner for a Class of Discontinuous Galerkin Approximations of Advection-Diffusion Problems
Author(s): Lasser, Caroline; Toselli, Andrea
Abstract:
We consider a scalar advection-diffusion problem and a recently proposed discontinuous Galerkin approximation, which employs discontinuous finite element spaces and suitable bilinear forms containing interface terms that ensure consistency. For the corresponding sparse, non-symmetric linear system, we propose and study an additive, two-level overlapping Schwarz preconditioner, consisting of a coarse problem on a coarse triangulation and local solvers associated with suitable problems defined on a family of subdomains.
This is a generalization of the corresponding overlapping method for approximations on continuous finite element spaces. Related to the lack of continuity of our approximation spaces, some interesting new features arise in our generalization, which have no analog in the conforming case.
We prove an upper bound for the number of iterations obtained by using this preconditioner with GMRES, which is independent of the number of degrees of freedom of the original problem and the number of subdomains. The performance of the method is illustrated by several numerical experiments for different test problems, using linear finite elements in two dimensions.
-
Ph.D. Thesis
2000
Delegation Logic: A Logic-based Approach to Distributed Authorization
Li, Ninghui
Abstract
|
PDF
Title: Delegation Logic: A Logic-based Approach to Distributed Authorization
Candidate: Li, Ninghui
Advisor(s): Feigenbaum, Joan; Siegel, Alan
Abstract:
We address the problem of authorization in large-scale, open, distributed systems. Authorization decisions are needed in electronic commerce, mobile-code execution, remote resource sharing, content advising, privacy protection, etc. We adopt the trust-management approach, in which “authorization” is viewed as a “proof-of-compliance” problem: Does a set of credentials prove that a request complies with a policy? We develop a logic-based language Delegation Logic (DL) to represent policies, credentials, and requests in distributed authorization. Delegation Logic extends logic programming (LP) languages with expressive delegation constructs that feature delegation depth and a wide variety of complex principals (including, but not limited to, k-out-of-n thresholds). D1LP, the monotonic version of DL, extends the LP language Datalog with delegation constructs. D2LP, the nonmonotonic version of DL, also features classical negation, negation-as-failure, and prioritized conflict handling. Our approach to defining and implementing DL is based on tractably compiling DL programs into ordinary logic programs (OLPs). This compilation approach enables DL to be implemented modularly on top of existing technologies for OLP, e.g., Prolog. As a trust-management language, Delegation Logic provides a concept of proof-of-compliance that is founded on well-understood principles of logic programming and knowledge representation. DL also provides a logical framework for studying delegation, negation of authority, conflicts between authorities, and their interplay.
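The core notion of depth-bounded delegation can be sketched as a reachability check (a toy illustration only; DL's complex principals, negation, and compilation to logic programs are not modeled here, and all names are hypothetical):

```python
# Toy depth-bounded delegation check. A credential (issuer, subject, depth)
# lets subject act for issuer and re-delegate along a further chain of at
# most depth - 1 credentials.

def authorized(owner, requester, credentials):
    """True if a delegation chain from owner reaches requester within depth bounds."""
    best = {owner: float('inf')}        # principal -> best remaining chain budget
    frontier = [(owner, float('inf'))]
    while frontier:
        principal, budget = frontier.pop()
        for issuer, subject, depth in credentials:
            if issuer == principal and budget >= 1:
                remaining = min(budget - 1, depth - 1)
                if remaining > best.get(subject, -1):
                    best[subject] = remaining
                    frontier.append((subject, remaining))
    return requester in best

creds = [('alice', 'bob', 2),      # bob may extend the chain by one more step
         ('bob', 'carol', 5),
         ('carol', 'dave', 5)]
assert authorized('alice', 'carol', creds)       # chain alice -> bob -> carol
assert not authorized('alice', 'dave', creds)    # alice's depth 2 is exhausted
```

Tracking the best remaining budget per principal is what keeps the check terminating even when credentials form cycles.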
-
TR2000-798
2000
Local Names in SPKI/SDSI 2.0
Li, Ninghui
Abstract
|
PDF
-
TR2000-799
2000
Variational Analysis of the Abscissa Mapping for Polynomials
Overton, Michael;
Burke, James V.
Abstract
|
PDF
-
TR2000-797
2000
A Feti preconditioner for two dimensional edge element approximations of Maxwell's equations on non-matching grids
Rapetti, F.;
Toselli, A.
Abstract
|
PDF
-
TR2000-807
2000
A Numerical Study of FETI Algorithms for Mortar Finite Element Methods
Stefanica, Dan
Abstract
|
PDF
Title: A Numerical Study of FETI Algorithms for Mortar Finite Element Methods
Author(s): Stefanica, Dan
Abstract:
The Finite Element Tearing and Interconnecting (FETI) method is an iterative substructuring method using Lagrange multipliers to enforce the continuity of the finite element solution across the subdomain interface. Mortar finite elements are nonconforming finite elements that allow for a geometrically nonconforming decomposition of the computational domain into subregions and, at the same time, for the optimal coupling of different variational approximations in different subregions. We present a numerical study of FETI algorithms for elliptic self-adjoint equations discretized by mortar finite elements. Several preconditioners which have been successful for the case of conforming finite elements are considered. We compare the performance of our algorithms when applied to classical mortar elements and to a new family of biorthogonal mortar elements and discuss the differences between enforcing mortar conditions instead of continuity conditions for the case of matching nodes across the interface. Our experiments are carried out for both two and three dimensional problems, and include a study of the relative costs of applying different preconditioners for mortar elements.
-
TR2000-804
2000
Domain Decomposition Methods for Mortar Finite Elements
Stefanica, Dan
Abstract
|
PDF
Title: Domain Decomposition Methods for Mortar Finite Elements
Author(s): Stefanica, Dan
Abstract:
Domain decomposition methods are powerful iterative methods for solving systems of algebraic equations arising from the discretization of partial differential equations by, e.g., finite elements. The computational domain is decomposed into overlapping or nonoverlapping subdomains. The problem is divided into, or assembled from, smaller subproblems corresponding to these subdomains. In this dissertation, we focus on domain decomposition methods for mortar finite elements, which are nonconforming finite element methods that allow for a geometrically nonconforming decomposition of the computational domain into subregions and for the optimal coupling of different variational approximations in different subregions.
We introduce a FETI method for mortar finite elements, and provide numerical comparisons of FETI algorithms for mortar finite elements when different preconditioners, given in the FETI literature, are considered. We also analyze the complexity of the preconditioners for the three dimensional versions of the algorithms.
We formulate a variant of the balancing method for mortar finite elements, which uses extended local regions to account for the nonmortar sides of the subregions. We prove a polylogarithmic condition number estimate for our algorithm in the geometrically nonconforming case. Our estimate is similar to those for other Neumann-Neumann and substructuring methods for mortar finite elements.
In addition, we establish several fundamental properties of mortar finite elements: the existence of the nonmortar partition of any interface, the L^2 stability of the mortar projection for arbitrary meshes on the nonmortar side, and Friedrichs and Poincaré inequalities for geometrically nonconforming mortar elements.
-
Ph.D. Thesis
2000
Queryable Expert Systems
Tanzer, David
Abstract
|
PDF
Title: Queryable Expert Systems
Candidate: Tanzer, David
Abstract:
No Title
DEPARTMENT OF COMPUTER SCIENCE
DOCTORAL DISSERTATION DEFENSE
Candidate: David Tanzer
Advisor: Dennis Shasha
Queryable Expert Systems
10:00 a.m., Tuesday, October 17, 2000
12th floor conference room, 719 Broadway
Abstract
Interactive rule-based expert systems, which work by ``interviewing'' their users, have found applications in fields ranging from aerospace to help desks. Although they have been shown to be useful, people find them difficult to query in flexible ways. This limits the reusability of the knowledge they contain. Databases and noninteractive rule systems such as logic programs, on the other hand, are queryable but they do not offer an interview capability. This thesis is the first investigation that we know of into query-processing for interactive expert systems.
In our query paradigm, the user describes a hypothetical condition and then the system reports which of its conclusions are reachable, and which are inevitable, under that condition. For instance, if the input value for bloodSugar exceeds 100 units, is the conclusion diabetes then inevitable? Reachability problems have been studied in other settings, e.g., the halting problem, but not for interactive expert systems.
We first give a theoretical framework for query-processing that covers a wide class of interactive expert systems. Then we present a query algorithm for a specific language of expert systems. This language is a restriction of production systems to an acyclic form that generalizes decision trees and classical spreadsheets. The algorithm effects a reduction from the reachability and inevitability queries into datalog rules with constraints. When preconditions are conjunctive, the data complexity is tractable. Next, we optimize for queries to production systems that contain regions which are decision trees. When general-purpose datalog methods are applied to the rules that result from our queries, the number of constraints that must be solved is O(n^2), where n is the size of the trees. We lower the complexity to O(n). Finally, we have built a query tool for a useful subset of the acyclic production systems. To our knowledge, these are the first interactive expert systems that can be queried about the reachability and inevitability of their conclusions.
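The two query types can be made concrete by brute-force enumeration over a toy rule system, reusing the abstract's own bloodSugar example (the rules and domains below are hypothetical, and the thesis reduces these queries to datalog with constraints rather than enumerating inputs):

```python
from itertools import product

# Toy sketch of reachability / inevitability queries over a tiny
# decision-tree-like rule system with small discrete input domains.

def conclusions(bloodSugar, exercises):
    # hypothetical rules, not from the thesis
    if bloodSugar > 100:
        return {'prediabetes'} if exercises else {'diabetes'}
    return {'healthy'}

def query(condition, conclusion, domains):
    """Return (reachable, inevitable) for `conclusion` under `condition`."""
    envs = [dict(zip(domains, vals)) for vals in product(*domains.values())]
    outcomes = [conclusions(**env) for env in envs if condition(env)]
    reachable = any(conclusion in o for o in outcomes)
    inevitable = bool(outcomes) and all(conclusion in o for o in outcomes)
    return reachable, inevitable

domains = {'bloodSugar': [80, 120, 150], 'exercises': [True, False]}
high = lambda env: env['bloodSugar'] > 100
# under high blood sugar, diabetes is reachable but not inevitable
assert query(high, 'diabetes', domains) == (True, False)
```

Enumeration is exponential in the number of inputs, which is exactly why the thesis's compilation to constrained datalog rules matters.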
-
TR2000-800
2000
FETI domain decomposition methods for scalar advection-diffusion problems
Toselli, A.
Abstract
|
PDF
Title: FETI domain decomposition methods for scalar advection-diffusion problems
Author(s): Toselli, A.
Abstract:
In this paper, we show that iterative substructuring methods of Finite Element Tearing and Interconnecting type can be successfully employed for the solution of linear systems arising from the finite element approximation of scalar advection-diffusion problems. Using similar ideas as those of a recently developed Neumann-Neumann method, we propose a one-level algorithm and a class of two-level algorithms, obtained by suitably modifying the local problems on the subdomains. We present numerical results for several significant test cases. Our methods appear to be optimal for flows without closed streamlines and possibly very small values of the viscosity. They also show very good performance for rotating flows and moderate Reynolds numbers. Therefore, the algorithms proposed appear to be well-suited for many convection-dominated problems of practical interest.
-
TR2000-806
2000
hp-finite element approximations on non-matching grids for partial differential equations with non-negative characteristic form
Toselli, Andrea
Abstract
|
PDF
Title: hp-finite element approximations on non-matching grids for partial differential equations with non-negative characteristic form
Author(s): Toselli, Andrea
Abstract:
We propose and analyze a domain decomposition method on non-matching grids for partial differential equations with non-negative characteristic form. No weak or strong continuity of the finite element functions, their normal derivatives, or linear combinations of the two is imposed across the boundaries of the subdomains. Instead, we employ suitable bilinear forms defined on the common interfaces, typical of discontinuous Galerkin approximations. We prove an error bound which is optimal with respect to the mesh-size and suboptimal with respect to the polynomial degree. Our analysis is valid for arbitrary shape-regular meshes and arbitrary partitions into subdomains. Our method can be applied to advective, diffusive, and mixed-type equations, and is well-suited for problems coupling hyperbolic and elliptic equations.
-
Ph.D. Thesis
2000
Scenario Customization for Information Extraction
Yangarber, Roman
Abstract
|
PDF
Title: Scenario Customization for Information Extraction
Candidate: Yangarber, Roman
Advisor(s): Grishman, Ralph
Abstract:
Information Extraction (IE) is an emerging NLP technology, whose function is to process unstructured, natural language text, to locate specific pieces of information, or facts, in the text, and to use these facts to fill a database. IE systems today are commonly based on pattern matching. The core IE engine uses a cascade of sets of patterns of increasing linguistic complexity. Each pattern consists of a regular expression and an associated mapping from syntactic to logical form. The pattern sets are customized for each new topic, as defined by the set of facts to be extracted.
Construction of a pattern base for a new topic is recognized as a time-consuming and expensive process--a principal roadblock to wider use of IE technology in the large. An effective pattern base must be precise and must have wide coverage. This thesis addresses the portability problem in two stages.
First, we introduce a set of tools for building patterns manually from examples. To adapt the IE system to a new subject domain quickly, the user chooses a set of example sentences from a training text, and specifies how each example maps to the extracted event--its logical form. The system then applies meta-rules to transform the example automatically into a general set of patterns. This effectively shifts the portability bottleneck from building patterns to finding good examples.
Second, we propose a novel methodology for discovering good examples automatically from a large un-annotated corpus of text. The system is initially seeded with a small set of relevant patterns provided by the user. An unsupervised learning procedure then identifies new patterns and classes of related terms on successive iterations. We present experimental results, which confirm that the discovered patterns exhibit high quality, as measured in terms of precision and recall.
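The shape of that bootstrapping loop can be sketched with toy data (illustrative scoring and vocabulary only; the actual system scores syntactic patterns against a large parsed corpus):

```python
# Toy bootstrapping loop: documents are modeled as sets of candidate patterns.
# Starting from seed patterns, each iteration accepts the new pattern that is
# most concentrated in documents the current pattern set already matches.

def bootstrap(docs, seeds, iterations=2):
    patterns = set(seeds)
    for _ in range(iterations):
        # documents matched by any accepted pattern are deemed relevant
        relevant = {i for i, d in enumerate(docs) if patterns & d}
        best, best_score = None, 0.0
        for cand in set().union(*docs) - patterns:
            hits = {i for i, d in enumerate(docs) if cand in d}
            # precision on relevant documents, weighted by support
            score = (len(hits & relevant) / len(hits)) * len(hits & relevant)
            if score > best_score:
                best, best_score = cand, score
        if best is None:
            break
        patterns.add(best)          # accept the strongest new pattern
    return patterns

docs = [{'appoint', 'succeed'}, {'appoint', 'name'},
        {'elect', 'name'}, {'weather', 'rain'}]
assert bootstrap(docs, {'appoint'}) == {'appoint', 'succeed', 'name'}
```

Patterns co-occurring with the seeds pull in related patterns, while patterns confined to off-topic documents (here, `weather` and `rain`) score zero and are never accepted.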
-
TR2000-805
2000
Expressing and Enforcing Distributed Resource Sharing Agreements
Zhao, Tao;
Karamcheti, Vijay
Abstract
|
PDF
Title: Expressing and Enforcing Distributed Resource Sharing Agreements
Author(s): Zhao, Tao; Karamcheti, Vijay
Abstract:
Advances in computing and networking technology, and an explosion in information sources, have resulted in a growing number of distributed systems being constructed out of resources contributed by multiple sources. Use of such resources is typically governed by sharing agreements between owning principals, which limit both who can access a resource and in what quantity. Despite their increasing importance, existing resource management infrastructures offer only limited support for the expression and enforcement of sharing agreements, typically restricting themselves to identifying compatible resources. In this paper, we present a novel approach building on the concepts of tickets and currencies to express resource sharing agreements in an abstract, dynamic, and uniform fashion. We also formulate the allocation problem of enforcing these agreements as a linear-programming model, automatically factoring in the transitive availability of resources via chained agreements. A case study modeling resource sharing among ISP-level web proxies shows the benefits of enforcing transitive agreements: worst-case waiting times of clients accessing these proxies improve by up to two orders of magnitude.
-
Ph.D. Thesis
1999
Higher-Order Conditional Synchronization
Afshartous, Niki
Abstract
|
PDF
Title: Higher-Order Conditional Synchronization
Candidate: Afshartous, Niki
Advisor(s): Goldberg, Benjamin
Abstract:
Conditional synchronization, a mechanism that conditionally blocks a thread based on the value of a boolean expression, currently exists in several programming languages. We propose promoting conditional synchronization to first-class status, allowing the synchronization object representing a suspended conditional synchronization to be passed as a value.
To demonstrate our idea we extend Concurrent ML and present several examples illustrating the expressiveness of first-class conditional synchronization (FCS). FCS has broadcast semantics making it appropriate for applications such as barriers and discrete-event simulation. The semantics also guarantee that no transient store configurations are missed. The end result facilitates abstraction and adds flexibility in writing concurrent programs. To minimize re-evaluation of synchronization conditions we propose a static analysis and translation that identifies expressions for the run-time system that could affect the value of a synchronization condition. The static analysis (which is based on an effect type system) therefore precludes excessive run-time system polling of synchronization conditions.
-
Ph.D. Thesis
1999
Metacomputing on Commodity Computers
Baratloo, Arash
Abstract
|
PDF
Title: Metacomputing on Commodity Computers
Candidate: Baratloo, Arash
Advisor(s): Kedem, Zvi
Abstract:
The advantages of using a set of networked commodity computers for parallel processing are well understood: such computers are cheap, widely available, and mostly underutilized. So why has the use of such environments for compute-intensive applications not proliferated? A major reason is that the inherent complexities of programming applications and coordinating their execution on networked computers outweigh the advantages.
In networked environments populated with multiuser commodity computers, both the computing speed and the number of available computers for executing parallel programs may change frequently and unpredictably. As a consequence, programs need to continuously adapt their execution to the changing environment. The execution of an application must therefore address such issues as dynamic changes in effective machine speeds, dynamic changes in the number of available machines, and sudden network and machine failures. It is not feasible for an application programmer to write programs that adapt to the behavior of a system whose critical aspects cannot be anticipated.
I will present a unified set of techniques to implement a virtual reliable parallel-processing platform on a set of unreliable computers with temporally varying execution speeds. These techniques are specifically designed for automatically adapting the execution of parallel programs to distributed environments. I will explain these techniques in the context of two software systems, Calypso and ResourceBroker, that have been built to validate them.
Calypso gives a programmer a simple tool to build and effectively execute parallel programs on a set of commodity computers. The notable properties of Calypso are: (1) a simple, intuitive programming model based on a virtual machine interface; (2) separation of logical and physical parallelism, allowing the source code to codify the algorithm rather than the execution environment; and (3) a runtime system that efficiently adapts the execution of the program to the dynamic nature of the runtime environment. ResourceBroker is a resource manager that demonstrates a novel technique to dynamically manage the assignment of computers to parallel programs. ResourceBroker can work with a variety of parallel systems, even transparently managing those that are not aware of its existence, such as PVM and MPI, and will distribute available resources fairly among multiple computations. As a result, a mix of parallel programs written using diverse programming systems can effectively execute concurrently on a set of computers.
-
TR1999-778
1999
Comic Strips for Algorithm Visualization
Biermann, H.;
Cole, R.
Abstract
|
PDF
Title: Comic Strips for Algorithm Visualization
Author(s): Biermann, H.; Cole, R.
Abstract:
This paper presents visualizations of binary search trees and splay trees. The visualizations comprise sequences of figures or frames, called comic strips. Consecutive frames are viewed two at a time to facilitate user (viewer) understanding of the algorithm steps. The visualizations are implemented in Java to facilitate their wide use. This paper explores several other considerations in the design of instructional visualizations.
-
TR1999-781
1999
Piecewise Smooth Subdivision Surfaces with Normal Control
Biermann, H.;
Levin, A.; Zorin, D.
Abstract
|
PDF
Title: Piecewise Smooth Subdivision Surfaces with Normal Control
Author(s): Biermann, H.; Levin, A.; Zorin, D.
Abstract:
In this paper we introduce improved rules for Catmull-Clark and Loop subdivision that overcome several problems with the original schemes (lack of smoothness at extraordinary boundary vertices, folds near concave corners). In addition, our approach to rule modification allows generation of surfaces with prescribed normals, both on the boundary and in the interior, which considerably improves control of the shape of surfaces.
-
TR1999-784
1999
Stateless Remote Environment Navigation with View Compression
Biermann, H.;
Hertzmann, A.; Meyer, J.; Perlin, K.
Abstract
|
PDF
Title: Stateless Remote Environment Navigation with View Compression
Author(s): Biermann, H.; Hertzmann, A.; Meyer, J.; Perlin, K.
Abstract:
We present a set of very low bandwidth techniques for navigating remote environments. In a typical setup using our system, a virtual environment resides on a server machine, and one or more users explore the environment from client machines. Each client uses previous views of the environment to predict the next view, using the known camera motion and image-based rendering techniques. The server performs the same prediction, and sends only the difference between the predicted and actual view. Compressed difference images require significantly less bandwidth than the compressed images of each frame, and thus can yield much higher frame rates. To request a view, the client simply sends the coordinates of the desired view and of the previous view to the server. This avoids the overhead of maintaining connections between the server and each client.
No restrictions are placed on the scene or the camera motions; the view compression technique may be used with arbitrarily complex 3D scenes or dynamically changing views from a web camera or a digital television broadcast. A lossy compression scheme is presented in which the client estimates the cumulative error in each frame, and requests a complete refresh before errors become noticeable.
This work is applicable to remote exploration of virtual worlds such as on head-mounted displays, Digital Television, or over the Internet.
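The predict-and-diff protocol above can be sketched numerically (a toy with 1D integer "views" and a shift standing in for camera motion; the real system predicts views with image-based rendering and compresses the residual images):

```python
# Toy sketch: both sides run the same view prediction, so only the
# prediction error needs to cross the network.

def predict(prev_view, shift):
    # stand-in for image-based rendering: shift the view, repeating the edge sample
    n = len(prev_view)
    return [prev_view[min(max(i + shift, 0), n - 1)] for i in range(n)]

def server_encode(actual, prev_view, shift):
    # server predicts the client's next view and sends only the residual
    return [a - p for a, p in zip(actual, predict(prev_view, shift))]

def client_decode(residual, prev_view, shift):
    # client repeats the prediction and adds the residual back
    return [p + r for p, r in zip(predict(prev_view, shift), residual)]

prev = [10, 20, 30, 40]
actual = [20, 30, 40, 41]                     # camera moved one sample
residual = server_encode(actual, prev, shift=1)
assert residual == [0, 0, 0, 1]               # mostly zeros: cheap to compress
assert client_decode(residual, prev, shift=1) == actual
```

Because the residual is near zero wherever the prediction is good, it compresses far better than the full view, which is the bandwidth saving the abstract claims.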
-
Ph.D. Thesis
1999
A Maximum Entropy Approach to Named Entity Recognition
Borthwick, Andrew
Abstract
|
PDF
Title: A Maximum Entropy Approach to Named Entity Recognition
Candidate: Borthwick, Andrew
Advisor(s): Grishman, Ralph
Abstract:
This thesis describes a novel statistical named-entity (i.e. ``proper name'') recognition system known as ``MENE'' (Maximum Entropy Named Entity). Named entity (N.E.) recognition is a form of information extraction in which we seek to classify every word in a document as being a person-name, organization, location, date, time, monetary value, percentage, or ``none of the above''. The task has particular significance for Internet search engines, machine translation, the automatic indexing of documents, and as a foundation for work on more complex information extraction tasks.
Two of the most significant problems facing the constructor of a named entity system are the questions of portability and system performance. A practical N.E. system will need to be ported frequently to new bodies of text and even to new languages. The challenge is to build a system which can be ported with minimal expense (in particular minimal programming by a computational linguist) while maintaining a high degree of accuracy in the new domains or languages.
MENE attempts to address these issues through the use of maximum entropy probabilistic modeling. It utilizes a very flexible object-based architecture which allows it to make use of a broad range of knowledge sources in making its tagging decisions. In the DARPA-sponsored MUC-7 named entity evaluation, the system displayed an accuracy rate which was well above the median, demonstrating that it can achieve the performance goal. In addition, we demonstrate that the system can be used as a post-processing tool to enhance the output of a hand-coded named entity recognizer through experiments in which MENE improved on the performance of N.E. systems from three different sites. Furthermore, when all three external recognizers are combined under MENE, we are able to achieve very strong results which, in some cases, appear to be competitive with human performance.
Finally, we demonstrate the trans-lingual portability of the system. We ported the system to two Japanese-language named entity tasks, one of which involved a new named entity category, ``artifact''. Our results on these tasks were competitive with the best systems built by native Japanese speakers despite the fact that the author speaks no Japanese.
-
TR1999-787
1999
Recovering Non-Rigid 3D Shape from Image Streams
Bregler, C.;
Hertzmann, A.; Biermann, H.
Abstract
|
PDF
Title: Recovering Non-Rigid 3D Shape from Image Streams
Author(s): Bregler, C.; Hertzmann, A.; Biermann, H.
Abstract:
This paper addresses the problem of recovering 3D non-rigid shape models from image sequences. For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full head and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, Tomasi and Kanade's factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration, and shape. We demonstrate this simple but effective algorithm on video sequences of speaking people. We were able to recover 3D non-rigid facial models with high accuracy.
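The rank constraint underlying this factorization is easy to check numerically. The sketch below is a hypothetical toy setup, not the authors' code: it builds a tracking matrix from K = 2 basis shapes under arbitrary 2x3 projections and verifies, via Gaussian elimination in pure Python, that its rank is bounded by 3K.

```python
import random

def rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    rows, cols, r = len(A), len(A[0]), 0
    for col in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(A[i][col]))
        if abs(A[piv][col]) < tol:
            continue  # column already eliminated
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            f = A[i][col] / A[r][col]
            for j in range(col, cols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

random.seed(1)
K, F, P = 2, 5, 8                         # basis shapes, frames, tracked points
basis = [[[random.uniform(-1, 1) for _ in range(P)] for _ in range(3)]
         for _ in range(K)]               # each basis shape is 3 x P
W = []                                    # tracking matrix, 2F x P
for t in range(F):
    R = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # projection rows
    c = [random.uniform(-1, 1) for _ in range(K)]                      # shape coefficients
    # per-frame shape: linear combination of the basis shapes
    S = [[sum(c[k] * basis[k][i][j] for k in range(K)) for j in range(P)]
         for i in range(3)]
    for row in R:
        W.append([sum(row[i] * S[i][j] for i in range(3)) for j in range(P)])

print(rank(W))  # bounded by 3K = 6 (a rigid scene, K = 1, would give rank <= 3)
```

Since W factors as a (2F x 3K) motion matrix times a (3K x P) stacked-basis matrix, its rank can never exceed 3K, which is the structure the three-step factorization exploits.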
-
Ph.D. Thesis
1999
Algorithms for Nonlinear Models in Computational Finance and their Object-oriented Implementation
Buff, Robert
Abstract
|
PDF
Title: Algorithms for Nonlinear Models in Computational Finance and their Object-oriented Implementation
Candidate: Buff, Robert
Advisor(s): Avellaneda, Marco
Abstract:
Individual components of financial option portfolios cannot be evaluated independently under nonlinear models in mathematical finance. This entails increased algorithmic complexity if the options under consideration are path-dependent. We describe algorithms that price portfolios of vanilla, barrier and American options under worst-case assumptions in an uncertain volatility setting. We present a generalized approach to worst-case volatility scenarios in which only the duration, but not the starting dates of periods of high volatility risk are known. Our implementation follows object-oriented principles and is modular and extensible. Combinatorial and numerical algorithms are separate and orthogonal to each other. We make our tools available to a wide audience by using standard Internet technologies.
-
TR1999-791
1999
Optimizing Matrix Stability
Burke, J. V.;
Lewis, A. S.; Overton, M. L.
Abstract
|
PDF
Title: Optimizing Matrix Stability
Author(s): Burke, J. V.; Lewis, A. S.; Overton, M. L.
Abstract:
Given an affine subspace of square matrices, we consider the problem of minimizing the spectral abscissa (the largest real part of an eigenvalue). We give an example whose optimal solution has Jordan form consisting of a single Jordan block, and we show, using non-Lipschitz variational analysis, that this behaviour persists under arbitrarily small perturbations to the example. Thus although matrices with nontrivial Jordan structure are rare in the space of all matrices, they appear naturally in spectral abscissa minimization.
-
TR1999-790
1999
Variational Analysis of Non-Lipschitz Spectral Functions
Burke, J. V.;
Overton, M. L.
Abstract
|
PDF
Title: Variational Analysis of Non-Lipschitz Spectral Functions
Author(s): Burke, J. V.; Overton, M. L.
Abstract:
We consider spectral functions $f \circ \lambda$, where $f$ is any permutation-invariant mapping from $\mathbb{C}^n$ to $\mathbb{R}$, and $\lambda$ is the eigenvalue map from $\mathbb{C}^{n \times n}$ to $\mathbb{C}^n$, ordering the eigenvalues lexicographically. For example, if $f$ is the function "maximum real part", then $f \circ \lambda$ is the spectral abscissa, while if $f$ is "maximum modulus", then $f \circ \lambda$ is the spectral radius. Both these spectral functions are continuous, but they are neither convex nor Lipschitz. For our analysis, we use the notion of subgradient extensively analyzed in Variational Analysis, R.T. Rockafellar and R. J.-B. Wets (Springer, 1998), which is particularly well suited to the variational analysis of non-Lipschitz spectral functions. We derive a number of necessary conditions for subgradients of spectral functions. For the spectral abscissa, we give both necessary and sufficient conditions for subgradients, and precisely identify the case where subdifferential regularity holds. We conclude by introducing the notion of semistable programming: minimizing a linear function of a matrix subject to linear constraints, together with the constraint that the eigenvalues of the matrix all lie in the right half-plane or on the imaginary axis. This is a generalization of semidefinite programming for non-Hermitian matrices. Using our analysis, we derive a necessary condition for a local minimizer of a semistable program, and give a generalization of the complementarity condition familiar from semidefinite programming.
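For a concrete instance of the two spectral functions named above, take a 2x2 matrix; the sketch below (a toy illustration, not from the paper) computes its eigenvalues with the quadratic formula and evaluates both the spectral abscissa and the spectral radius.

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]] via the quadratic formula."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return [(tr + disc) / 2, (tr - disc) / 2]

lam = eig2(0, 1, -2, -3)              # eigenvalues -1 and -2
abscissa = max(z.real for z in lam)   # spectral abscissa: maximum real part
radius = max(abs(z) for z in lam)     # spectral radius: maximum modulus
print(abscissa, radius)
```

Here the abscissa is -1 and the radius is 2; a matrix is asymptotically stable exactly when its spectral abscissa is negative, which is the quantity minimized in the companion report on optimizing matrix stability.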
-
TR1999-793
1999
Automatic Configuration and Run-time Adaptation of Distributed Applications
Chang, F.;
Karamcheti, V.
Abstract
|
PDF
Title: Automatic Configuration and Run-time Adaptation of Distributed Applications
Author(s): Chang, F.; Karamcheti, V.
Abstract:
Current technology trends point towards both an increased heterogeneity in hardware platforms and an increase in the mechanisms available to applications for controlling how these platforms are utilized. These trends motivate the design of resource-aware distributed applications, which proactively monitor and control utilization of the underlying platform, ensuring a desired performance level by adapting their behavior to changing resource characteristics.
This paper describes a general framework for enabling application adaptation on distributed platforms. The framework combines programmer specification of alternate execution behaviors (configurations) with automatic support for deciding when and how to adapt, relying extensively on two components: (1) profile-based modeling of application behavior, automatically generated by measuring application performance in a virtual execution environment with controllable resource consumption, and (2) application-specific continuous monitoring of current resource characteristics. The latter detects when application configurations need to change while the former guides the selection of a new configuration.
We evaluate these framework components using an interactive image visualization application. Our results demonstrate that starting from a natural specification of alternate application behaviors and an automatically generated performance database, our framework permits the application to both configure itself in diverse distributed environments and adapt itself to run-time changes in resource characteristics so as to satisfy user preferences of output quality.
-
TR1999-795
1999
Secure, User-level Resource-constrained Sandboxing
Chang, F.;
Itzkovitz, A.; Karamcheti, V.
Abstract
|
PDF
Title: Secure, User-level Resource-constrained Sandboxing
Author(s): Chang, F.; Itzkovitz, A.; Karamcheti, V.
Abstract:
The popularity of mobile and networked applications has resulted in an increasing demand for execution ``sandboxes''---environments that impose irrevocable qualitative and quantitative restrictions on resource usage. Existing approaches either verify application compliance to restrictions at start time (e.g., using certified code or language-based protection) or enforce it at run time (e.g., using kernel support, binary modification, or active interception of the application's interactions with the operating system). However, their general applicability is constrained by the fact that they are either too heavyweight and inflexible, or are limited in the kinds of sandboxing restrictions and applications they can handle.
This paper presents a secure user-level sandboxing approach for enforcing both qualitative and quantitative restrictions on resource usage of applications in distributed systems. Our approach actively monitors an application's interactions with the underlying system, proactively controlling it as desired to enforce the desired behavior. Our approach leverages a core set of user-level mechanisms that are available in most modern operating systems: fine-grained timers, monitoring infrastructure (e.g., the /proc filesystem), debugger processes, priority-based scheduling, and page-based memory protection. We describe implementations of a sandbox that imposes quantitative restrictions on CPU, memory, and network usage on two commodity operating systems: Windows NT and Linux. Our results show that application usage of resources can be restricted to within 3% of desired limits with minimal run-time overhead.
-
Ph.D. Thesis
1999
Prototyping a Prototyping Language
Chen, Hseu-Ming
Abstract
|
PDF
Title: Prototyping a Prototyping Language
Candidate: Chen, Hseu-Ming
Advisor(s): Harrison, Malcolm C.
Abstract:
The development of a prototyping language should follow the usual software-engineering methodology: starting with an evolvable, easily modifiable, working prototype of the proposed language. Rather than committing to the development of a mammoth compiler at the outset, we can design a translator from the prototyping language to another high-level language as a viable alternative. From a software-engineering point of view, the advantages of the translator approach are its shorter development cycle and lessened maintenance burden.
In prototyping language design, there are often innovative cutting-edge features which may not be well-understood. It is inevitable that numerous experimentations and revisions will be made to the current design, and hence supporting evolvability and modifiability is critical in the translator design.
In this dissertation we present an action-semantics-based framework for high-level source-to-source language translation. Action semantics is a form of denotational semantics that is based on abstract semantic algebra rather than Scott domain and lambda-notation. More specifically, this model not only provides a formal semantics definition for the source language and sets guidelines for implementations as well as migration, but also facilitates mathematical reasoning and a correctness proof of the entire translation process. The translation is geared primarily towards readability, maintainability, and type-preserving target programs, only secondarily towards reasonable efficiency.
We have acquired a collection of techniques for the translation of certain non-trivial high-level features of prototyping languages and declarative languages into efficient procedural constructs in imperative languages like Ada95, while using the abstraction mechanism of the target languages to maximize the readability of the target programs. In particular, we translate Griffin existential types into Ada95 using its object-oriented features, based on coercion calculus. This translation is actually more general, in that one can add existential types to a language (with a modicum of extra syntax) supporting the object-oriented paradigm without augmenting its type system, through intra-language transformation. We also present a type-preserving translation of closures which allows us to drop the whole-program-transformation requirement.
-
TR1999-792
1999
Edge-Coloring Bipartite Multigraphs in $O(E\log D)$ Time
Cole, R.;
Ost, K.; Schirra, S.
Abstract
|
PDF
Title: Edge-Coloring Bipartite Multigraphs in $O(E\log D)$ Time
Author(s): Cole, R.; Ost, K.; Schirra, S.
Abstract:
Let $V$, $E$, and $D$ denote the cardinality of the vertex set, the cardinality of the edge set, and the maximum degree of a bipartite multigraph $G$. We show that a minimal edge-coloring of $G$ can be computed in $O(E\log D)$ time.
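The result sharpens the classical constructive proof of König's edge-coloring theorem, which already colors a simple bipartite graph with $D$ colors by flipping two-colored alternating paths, but in $O(VE)$ time. A sketch of that slower baseline (not the paper's $O(E\log D)$ algorithm) is:

```python
def edge_color_bipartite(n, edges):
    """Properly color the edges of a simple bipartite graph on n vertices
    with D = max degree colors, flipping an a/b-alternating path whenever
    the colors missing at the two endpoints disagree."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    D = max(deg)
    at = [dict() for _ in range(n)]   # at[x][c] = (neighbor, edge index)
    color = [None] * len(edges)

    def missing(x):                   # smallest color unused at vertex x
        return next(c for c in range(D) if c not in at[x])

    for ei, (u, v) in enumerate(edges):
        a, b = missing(u), missing(v)
        if a != b and a in at[v]:
            # Walk the path of alternating colors a, b starting at v ...
            path, x, c = [], v, a
            while c in at[x]:
                y, ej = at[x][c]
                path.append((x, y, ej, c))
                x, c = y, (b if c == a else a)
            # ... then swap colors a and b along it (it never reaches u).
            for p, q, ej, cold in path:
                del at[p][cold]
                del at[q][cold]
            for p, q, ej, cold in path:
                cnew = b if cold == a else a
                at[p][cnew] = (q, ej)
                at[q][cnew] = (p, ej)
                color[ej] = cnew
        color[ei] = a                 # a is now free at both endpoints
        at[u][a] = (v, ei)
        at[v][a] = (u, ei)
    return D, color

# K_{3,3}: max degree 3, so 3 colors suffice.
edges = [(i, 3 + j) for i in range(3) for j in range(3)]
D, coloring = edge_color_bipartite(6, edges)
print(D, coloring)
```

The bipartite structure guarantees the alternating path starting at v can never end at u, so the new edge can always receive color a; the paper's contribution is replacing this per-edge path flipping with a much faster divide-and-conquer scheme.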
-
TR1999-789
1999
Randomized Swap Matching in $O(m \log m \log |\Sigma| )$ time
Cole, R.;
Hariharan, R.
Abstract
|
PDF
Title: Randomized Swap Matching in $O(m \log m \log |\Sigma| )$ time
Author(s): Cole, R.; Hariharan, R.
Abstract:
We give a randomized algorithm for the {\em Pattern Matching with Swaps} problem which runs in $O(m \log m \log |\Sigma| )$ time on a text of length $2m-1$ and a pattern of length $m$ drawn from an alphabet set of size $|\Sigma|$. This algorithm gives the correct answer with probability at least $1-\frac{1}{m}$ and does not miss a match. The best deterministic algorithm known for this problem takes $O(m^{4/3} \mbox{polylog}(m))$ time.
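As a baseline for the problem definition, a length-$m$ window of the text matches the pattern under disjoint adjacent swaps exactly when the greedy scan below succeeds; this is a toy checker costing $O(m)$ per window (hence $O(m^2)$ over all windows), not the paper's randomized algorithm.

```python
def swap_match(pattern, window):
    """True iff window equals pattern after swapping some disjoint adjacent
    character pairs. Greedy is safe: if pattern[i] == window[i], a swap
    overlapping position i could only exchange equal characters."""
    i, n = 0, len(pattern)
    while i < n:
        if pattern[i] == window[i]:
            i += 1
        elif (i + 1 < n and pattern[i] == window[i + 1]
                        and pattern[i + 1] == window[i]):
            i += 2                      # one adjacent swap consumed
        else:
            return False
    return True

print(swap_match("abac", "baac"))   # True: swapping "ba" gives "ab"
print(swap_match("abc", "cba"))     # False: 'a' and 'c' are not adjacent
```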
-
Ph.D. Thesis
1999
Distributed intelligence with bounded rationality: Applications to economies and networks
Even, Ron
Abstract
|
PDF
Title: Distributed intelligence with bounded rationality: Applications to economies and networks
Candidate: Even, Ron
Advisor(s): Mishra, Bud
Abstract:
This dissertation examines bounded rationality as a tool in distributed systems of intelligent agents. We have implemented, in Java, a simulator for complex adaptive systems called CAF??. We use our framework to simulate a simple network and compare the effectiveness of bounded rationality at routing and admission control to that of a more traditional, source-based, greedy routing approach. We find that the boundedly rational approach is particularly effective when user behavior is synchronized, such as occurs during breaking news releases on the World Wide Web, for example. We develop the key structures of our framework by first examining, through simulation, the behavior of boundedly rational speculators in a simple economy. We find them to be instrumental in bringing the economy quickly to price equilibrium as well as in maintaining the equilibrium in the face of changing conditions. We draw several interesting conclusions as to the key similarities between economic and computational systems and also the situations where they differ drastically.
-
Ph.D. Thesis
1999
Pattern Discovery in Biology: Theory and Applications
Floratos, Aristidis
Abstract
|
PDF
Title: Pattern Discovery in Biology: Theory and Applications
Candidate: Floratos, Aristidis
Advisor(s): Boppana, Ravi; Rigoutsos, Isidore
Abstract:
Molecular Biology studies the composition and interactions of life's agents, namely the various molecules (e.g. DNA, proteins, lipids) sustaining the living process. Traditionally, this study has been performed in wet labs using mostly physicochemical techniques. Such techniques, although precise and detailed, are often cumbersome and time consuming. On top of that, recent advances in sequencing technology have allowed the rapid accumulation of DNA and protein data. As a result, a gap has been created (and is constantly being expanded): on the one side there is a rapidly growing collection of data containing all the information upon which life is built; and on the other side we are currently unable to keep up with the study of this data, impaired by the limits of existing analysis tools. It is obvious that alternative analysis techniques are badly needed. In this work we examine how computational methods can help in mining the information contained in collections of biological data. In particular, we investigate how sequence similarity among various macromolecules (e.g. proteins) can be exploited towards the extraction of biologically useful information.
-
Ph.D. Thesis
1999
Matching Algorithms and Feature Match Quality Measures for Model-Based Object Recognition with Applications to Automatic Target Recognition
Garcia-Keller, Martin
Abstract
|
PDF
Title: Matching Algorithms and Feature Match Quality Measures for Model-Based Object Recognition with Applications to Automatic Target Recognition
Candidate: Garcia-Keller, Martin
Advisor(s): Hummel, Robert
Abstract:
In the fields of computational vision and image understanding, the object recognition problem can often be formulated as a problem of matching a collection of model features to features extracted from an observed scene. This dissertation is concerned with the use of feature-based match similarity measures and feature match algorithms in object detection and classification in the context of image understanding from complex signature data. Our applications are in the domains of target vehicle recognition from radar imagery, and binocular stereopsis.
In what follows, we will consider “image understanding” to encompass the set of activities necessary to identify objects in visual imagery and to establish meaningful three-dimensional relationships between the objects themselves, or between the object and the viewer. The main goal in image understanding then involves the transformation of images to symbolic representation, effectively providing a high-level description of an image in terms of objects, object attributes, and relationships between known objects. As such, image understanding subsumes the capabilities traditionally associated with image processing, object recognition and artificial vision [Crevier and Lepage 1997].
In human and/or biological vision systems, the task of object recognition is a natural and spontaneous one. Humans can recognize immediately and without effort a huge variety of objects from diverse perceptual cues and multiple sensorial inputs. The operations involved are complex and inconspicuous psychophysical and biological processes, including the use of properties such as shape, color, texture, pattern, motion, context, as well as considerations based on contextual information, prior knowledge, expectations, functionality hypothesis, and temporal continuity. These operations and their relation to machine object recognition and artificial vision are discussed in detail elsewhere [Marr 1982], [Biederman 1985], but they are not our concern in this thesis.
In this research, we consider only the simpler problem of model-based vision, where the objects to be recognized come from a library of three-dimensional models known in advance, and the problem is constrained using context and domain-specific knowledge.
The relevance of this work resides in its potential to support state-of-the-art developments in both civilian and military applications including knowledge-based image analysis, sensors exploitation, intelligence gathering, evolving databases, interactive environments, etc. A large number of applications are reviewed below in section 1.4. Experimental results are presented in Chapters 5, 6, and
-
TR1999-777
1999
An Improved Intra-procedural May-alias Analysis Algorithm
Goyal, D.
Abstract
|
PDF
Title: An Improved Intra-procedural May-alias Analysis Algorithm
Author(s): Goyal, D.
Abstract:
Hind et al.~\cite{Hind99} use a standard data flow framework \cite{Rosen79, Tarjan81} to formulate an intra-procedural may-alias computation. The intra-procedural aliasing information is computed by applying well-known iterative techniques to the Sparse Evaluation Graph (SEG) \cite{Choi91}. The computation requires a transfer function for each node that causes a potential pointer assignment (relating the data flow information flowing into and out of the node), and a set of aliases holding at the entry node of the SEG. The intra-procedural analysis assumes that precomputed information in the form of summary functions is available for all function-call sites in the procedure being analyzed. The time complexity of the intra-procedural may-alias computation for the algorithm presented by Hind et al.~\cite{Hind99} is $O(N^6)$ in the worst case (where $N$ is the size of the SEG). In this paper we present a worst case $O(N^3)$ time algorithm to compute the same may-alias information.
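The iterative technique referred to above is the standard forward worklist fixpoint over the SEG, with set union as the merge operator. The sketch below is a generic illustration with hypothetical node names and a made-up transfer function, not the paper's $O(N^3)$ algorithm.

```python
def dataflow_fixpoint(nodes, succ, transfer, entry_fact):
    """Forward may-analysis: merge incoming facts by union, apply the
    node's transfer function, and iterate until nothing changes."""
    preds = {n: [] for n in nodes}
    for n in nodes:
        for s in succ.get(n, []):
            preds[s].append(n)
    out = {n: frozenset() for n in nodes}
    work = list(nodes)
    while work:
        n = work.pop(0)
        inset = set(entry_fact) if n == nodes[0] else set()
        for p in preds[n]:
            inset |= out[p]
        new = frozenset(transfer(n, inset))
        if new != out[n]:
            out[n] = new
            work.extend(succ.get(n, []))   # re-examine affected successors
    return out

# Toy SEG: entry -> assign -> exit, where "assign" models "p = &x"
# by generating the may-alias pair (p, x).
succ = {"entry": ["assign"], "assign": ["exit"], "exit": []}
def transfer(n, facts):
    return facts | {("p", "x")} if n == "assign" else facts

result = dataflow_fixpoint(["entry", "assign", "exit"], succ, transfer, set())
print(sorted(result["exit"]))   # [('p', 'x')]
```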
-
Ph.D. Thesis
1999
Learning to Play Network Games
Greenwald, Amy
Abstract
|
PDF
Title: Learning to Play Network Games
Candidate: Greenwald, Amy
Advisor(s): Mishra, Bud
Abstract:
This talk concerns the strategic behavior of automated agents in the framework of network game theory, with particular focus on the collective behavior that arises via learning. In particular, ideas are conveyed on both the theory and simulation of learning in network games, in terms of two sample applications. The first application is network control, presented via an abstraction known as the Santa Fe bar problem, for which it is proven that rational learning does *not* converge to Nash equilibrium, the classic game-theoretic solution concept. On the other hand, it is observed via simulations that low-rationality learning, where agents trade off between exploration and exploitation, typically converges to mixed strategy Nash equilibria in this game. The second application is the economics of shopbots - agents that automatically search the Internet for price and product information - in which learning yields behaviors ranging from price wars to tacit collusion, with sophisticated low-rationality learning algorithms converging to Nash equilibria. This work forms part of a larger research program that advocates learning and game theory as a framework in which to model the interactions of computational agents in network domains.
-
Ph.D. Thesis
1999
Experiments in refining graphical interface widgets
Hecker, Yaron Chanoch
Abstract
|
PDF
Title: Experiments in refining graphical interface widgets
Candidate: Hecker, Yaron Chanoch
Abstract:
This thesis investigates GUIs and their shortcomings. We demonstrate that there is room for refinement of existing graphical user interfaces, including those interfaces with which we are most familiar. A foundation for our designs is first established. It consists of known human capabilities, especially concerning hand-eye coordination, short term and long term memory, and visual perception. Accumulated experience in static and animated visual design provides additional guides for our work. On the basis of this foundation we analyze existing widgets. A series of new widgets are then proposed to address observed deficiencies in existing designs for scrolling, multiple copy and paste in text environments, text insertion and selection, and window management. Lessons learned from analyzing our new designs and observations of existing widgets are generalized into principles of widget design.
-
TR1999-783
1999
Interactive 3D Scene Reconstruction from Images
Hertzmann, A.
Abstract
|
PDF
Title: Interactive 3D Scene Reconstruction from Images
Author(s): Hertzmann, A.
Abstract:
We propose an interactive framework for reconstructing an arbitrary 3D scene consistent with a set of images, for use in example-based image synthesis. Previous research has used human input to specify feature matches, which are then processed off-line; however, it is very difficult to correctly match images without feedback. The central idea of this paper is to perform and display 3D reconstruction during user modification. By allowing the user to interactively manipulate the image correspondence and the resulting 3D reconstruction, we can exploit both the user's intuitive image understanding and the computer's processing power.
-
Ph.D. Thesis
1999
Automated Software Deployment
Jai, Benchiao
Abstract
|
PDF
Title: Automated Software Deployment
Candidate: Jai, Benchiao
Advisor(s): Siegel, Alan
Abstract:
The work users do with an application can be divided into actual work accomplished using the application and overhead performed in order to use the application. The latter can be further partitioned based on the time at which the work is performed: before (application location and delivery), during (installation) and after (upgrade) the installation of the application. This category can be characterized as the software deployment overhead. This thesis presents a component architecture RADIUS (Rapid Application location, Delivery, Installation and Upgrade System) in which applications can be built with no software deployment overhead to the users. An application is deployed automatically by simply giving the user a document produced by the application. Furthermore, the facilities in RADIUS make the applications self-upgrading. In the end, the users perform no deployment overhead work at all.
The conventional way of using an application is to install the application first, then start using documents of the application. The object-oriented programming (OOP) paradigm suggests that this order should be reversed: the data should lead to the code. However, almost all software fails to meet this model of design at the persistence level. While modern software often uses OOP at the program level, the underlying operating systems do not support OOP at the document/file level. OOP languages use pointers to methods to indicate what operations can be performed on the objects. We extend the idea to include "pointers to applications". Each document has an attached application pointer, which is read by RADIUS when the document is opened. This application pointer is then used to locate and deliver the application module necessary for the document.
RADIUS is designed to be compatible with existing technologies and requires no extensions to either programming languages or operating systems. It is orthogonal to programming tools, is language-independent and compatible among operating systems, and consequently does not impose limitations on which environments the developers can use. We illustrate the implementations for the two most popular platforms today - C++ on Windows, and Java. RADIUS is also orthogonal to other component systems such as CORBA or COM and is easy to integrate with them.
-
TR1999-780
1999
A Domain Decomposition Method with Lagrange Multipliers for Linear Elasticity
Klawonn, A.;
Widlund, O. B.
Abstract
|
PDF
Title: A Domain Decomposition Method with Lagrange Multipliers for Linear Elasticity
Author(s): Klawonn, A.; Widlund, O. B.
Abstract:
A new domain decomposition method with Lagrange multipliers for elliptic problems is introduced. It is based on a reformulation of the well--known FETI method as a saddle point problem with both primal and dual variables as unknowns. The resulting linear system is solved with block--structured preconditioners combined with a suitable Krylov subspace method. This approach allows the use of inexact subdomain solvers for the positive definite subproblems. It is shown that the condition number of the preconditioned saddle point problem is bounded independently of the number of subregions and depends only polylogarithmically on the number of degrees of freedom of individual local subproblems. Numerical results are presented for a plane stress cantilever membrane problem.
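Schematically, the saddle point reformulation couples the primal subdomain unknowns with the dual Lagrange multipliers enforcing continuity across subdomain interfaces (the notation below is ours, not necessarily the paper's):

```latex
\begin{pmatrix} K & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix}
```

Here $K$ is block-diagonal over the subdomain stiffness matrices, and the constraint $Bu = 0$ expresses continuity of $u$ across the interface; the block-structured preconditioners target this indefinite system while allowing the positive definite subdomain solves to be inexact.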
-
TR1999-796
1999
FETI and Neumann--Neumann Iterative Substructuring Methods: Connections and New Results
Klawonn, A.;
Widlund, O.
Abstract
|
PDF
-
Ph.D. Thesis
1999
Toward Stronger User Authentication
Monrose, Newman Fabian
Abstract
|
PDF
Title: Toward Stronger User Authentication
Candidate: Monrose, Newman Fabian
Advisor(s): Kedem, Zvi
Abstract:
Password-based authentication is the dominant mechanism for verifying the identity of computer users, even though it is well known that people frequently choose passwords that are vulnerable to dictionary attacks. This talk addresses the issue of improving the security of password-based authentication, and presents authentication techniques that are more secure than traditional approaches against both on-line and off-line attacks.
We present a technique for strengthening the security of a textual password by augmenting it with biometric information such as the duration and latency of keystrokes during entry of the password. Thereby, both the password and the user's typing pattern are used to corroborate the user's identity. The technique presented adapts to gradual changes in a user's typing pattern while maintaining the same strengthened password across authenticated sessions. Moreover, our technique does not reveal which of a user's keystroke features are used to generate the corresponding strengthened password. This knowledge is hidden even from an attacker who captures all the system information used by the authentication server, and we show that our technique increases significantly the amount of work such an attacker must perform.
Additionally, we present an alternative technique for user authentication that exploits features of graphical input devices. We propose and evaluate ``graphical passwords'', which serve the same purpose as textual passwords, but consist of handwritten drawings, possibly in addition to text. Graphical passwords derive their strength from the fact that graphical input devices allow one to decouple the positions of inputs from the temporal order in which these inputs occur. We use this independence to build new password-based authentication schemes that are convincingly stronger than conventional methods.
-
Ph.D. Thesis
1999
Optimization Over Symmetric Cones
Nayakkankuppam, Madhu
Abstract
|
PDF
Title: Optimization Over Symmetric Cones
Candidate: Nayakkankuppam, Madhu
Advisor(s): Overton, Michael
Abstract:
We consider the problem of optimizing a linear function over the intersection of an affine space and a special class of closed, convex cones, namely the symmetric cones over the reals. This problem subsumes linear programming, convex quadratically constrained quadratic programming, and semidefinite programming as special cases. First, we derive some perturbation results for this problem class. Then, we discuss two solution methods: an interior-point method capable of delivering highly accurate solutions to problems of modest size, and a first order bundle method which provides solutions of low accuracy, but can handle much larger problems. Finally, we describe an application of semidefinite programming in electronic structure calculations, and give some numerical results on sample problems.
-
Ph.D. Thesis
1999
Efficient Computational Model for Energy Propagation in Geometrically Represented Large Environments
Rajkumar, Ajay
Abstract
|
PDF
Title: Efficient Computational Model for Energy Propagation in Geometrically Represented Large Environments
Candidate: Rajkumar, Ajay
Advisor(s): Perlin, Ken
Abstract:
Current radio propagation algorithms are very narrowly focused to specific types of input models and do not scale well to an increase in the number of receiver locations or the number of polygons in an input model. In this dissertation, we look at the problem of efficiently computing energy propagation at radio frequencies in a range of geometrically defined environments from a given transmitter location and for various transmitter and receiver characteristics. To achieve this goal, we propose a unified approach to radio propagation for different types of input models and their combinations as well, by representing the geometry as a binary space partitioning tree and broadcasting energy from the source. The approach is both scalable to large input models as well as dynamically adapts to its scale without incurring unreasonable computational cost. The proposed approach is equally effective for acoustic modeling as well.
We present a new adaptive ray-beam tracing algorithm which initially tessellates the surface of a transmitter into four-sided polygons. Each polygon is cast as a beam, which avoids arbitrarily large gaps or overlaps between adjacent beams. For fast intersection computation, each beam carries information of its medial ray as well. As the computation proceeds, a ray-beam is adaptively subdivided depending on various parameters. The proposed algorithm has sublinear time complexity in terms of the number of receiver locations.
Modeling diffraction off an edge of a wedge is important to compute radio signal that reaches the shadow region of the wedge. Storing these edges explicitly in a data structure can be very expensive for large input models and especially for terrain-based models that have significant elevation variations. We present a new runtime edge-detection algorithm instead of storing the edges statically and its adaptation to binary space partitioning tree represented environments.
We have developed a propagation prediction system called Propagate using these algorithms with good statistical correlation between predicted and measured results for a number of different input models. The proposed algorithms have been used to model several other important computations related to a cellular network of transmitters such as signal strength and path loss, delay spread, angular spread, carrier-to-interference ratio, and modeling of different antenna diversity schemes.
-
TR1999-776
1999
Memory Classification Analysis for Recursive C Structures
Schwartz, N.
Abstract
|
PDF
Title: Memory Classification Analysis for Recursive C Structures
Author(s): Schwartz, N.
Abstract:
The long-time quest of the parallelizing compiler community for effective aggregate summarization techniques has led to increasingly sophisticated array section representations. In this paper, we show how the latest of these can be used for nested C structure summarization. We then show how this summarization notation can be used to make Shape Analysis precise on arbitrarily low-level code. Combining these techniques, we show that an appropriate generalization of Memory Classification Analysis, originally presented for Fortran programs, provides a flow dependence summarization technique for C code as well, while avoiding code normalization compared with previous techniques. In so doing, we break down perhaps the final conceptual barriers in the construction of practical programmer-friendly C parallelizing compilers.
-
TR1999-779
1999
Parallel Programming for Everyone
Schwartz, N.
Abstract
|
PDF
Title: Parallel Programming for Everyone
Author(s): Schwartz, N.
Abstract:
This article proposes a novel architectural model which augments the latest developments in automatic program parallelization and distributed systems to achieve a level of practicality as yet unknown to either field. Today's premier automatic parallelization model is well suited to implementation on a network of commodity workstations (NOW) using only a very thin layer of software support. We describe a parallelizing compiler framework which greatly simplifies the parallelization of even highly complex sequential applications while producing extremely effective parallelizations for the NOW. We further show how our model greatly enhances programmer productivity through the use of minimally invasive C++ transformation techniques, aiding both debugging and portability.
-
TR1999-782
1999
Sparse Constant Propagation via Memory Classification Analysis
Schwartz, N.
Abstract
|
PDF
Title: Sparse Constant Propagation via Memory Classification Analysis
Author(s): Schwartz, N.
Abstract:
This article presents a novel Sparse Constant Propagation technique which provides a heretofore unknown level of practicality. Unlike other techniques, which are based on data flow, it is based on the execution-order summarization sweep employed in Memory Classification Analysis (MCA), a technique originally developed for array dependence analysis. This methodology achieves a precise description of memory reference activity within a summary representation that grows only linearly with program size. Because of this, the collected sparse constant information need not be artificially limited to satisfy classical data flow lattice requirements, which constrain other algorithms to discard information in the interests of efficient termination. Sparse Constant Propagation is not only more effective within the MCA framework, but in fact generalizes the framework. Original MCA provides the means to break only simple induction and reduction types of flow-dependences. The integrated framework provides the means to also break flow-dependences for which array values can be propagated.
-
Ph.D. Thesis
1999
Automatic Parallelization: An Incremental, Optimistic, Practical Approach
Schwartz, Naftali
Abstract
|
PDF
Title: Automatic Parallelization: An Incremental, Optimistic, Practical Approach
Candidate: Schwartz, Naftali
Advisor(s): Kedem, Zvi
Abstract:
The historic focus of Automatic Parallelization efforts has been limited in two ways. First, parallelization has generally been attempted only on codes which can be proven to be parallelizable. Unfortunately, the requisite dependence analysis is undecidable, and today's applications demonstrate that this restriction is more than theoretical. Second, parallel program generation has generally been geared to custom multiprocessing hardware. Although a network of commodity workstations (NOW) could theoretically be harnessed to serve as a multiprocessing platform, the NOW has characteristics which are at odds with effective utilization.
This thesis shows that by restricting our attention to the important domain of ``embarrassingly parallel'' applications, leveraging existing scalable and efficient network services, and carefully orchestrating a synergy between compile-time transformations and a small runtime system, we can achieve a parallelization that not only works in the face of inconclusive program analysis, but is indeed efficient for the NOW. We optimistically parallelize loops whose memory access behavior is unknown, relying on the runtime system to provide efficient detection and recovery in the case of an overly optimistic transformation. Unlike previous work in speculative parallelization, we provide a methodology which is not tied to the Fortran language, making it feasible as a generally useful approach. Our runtime system implements Two-Phase Idempotent Eager Scheduling (TIES) for efficient network execution, providing an Automatic Parallelization platform with performance scalability for the NOW.
Our transformation divides the original program into a server and zero or more clients. The server program is a specialization of the original application with each parallel loop replaced with a scheduling call to the client which comprises the body of that parallel loop. The scheduler remotely executes the appropriate instances of this client on available machines.
We describe the transformation and runtime system in detail, and report on the automatic transformation achieved by our implementation prototype in two case studies. In each of these cases, we were able to automatically locate the important coarse-grained loops, construct a shared-memory layout, and generate appropriate server and client code. Furthermore, we show that our generated parallel programs achieve near-linear speedups for sufficiently large problem sizes.
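A schematic of the server/client split described above, sketched in Python. A thread pool stands in for remote execution on NOW machines, and TIES-style idempotent re-execution and recovery are omitted; the function names are illustrative, not the prototype's:

```python
# Schematic server/client transformation. The original parallel loop
#     for i in range(n): result[i] = f(i)
# becomes a client function (the extracted loop body) plus a server in which
# the loop is replaced by a scheduling call over available workers.

from concurrent.futures import ThreadPoolExecutor

def client(i):
    """Body of the original parallel loop, extracted as a unit of work."""
    return i * i                      # placeholder computation

def server(n, workers=4):
    """Specialization of the original program: the coarse-grained loop is
    now a scheduling call that farms client instances out to workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(client, range(n)))
```

With idempotent clients, a scheduler is free to re-execute an instance on another machine if one worker disappears, which is the property the TIES runtime exploits.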
-
TR1999-788
1999
A FETI Domain Decomposition Method for Maxwell's Equations with Discontinuous Coefficients in Two Dimensions
Toselli, A.;
Klawonn, A.
Abstract
|
PDF
Title: A FETI Domain Decomposition Method for Maxwell's Equations with Discontinuous Coefficients in Two Dimensions
Author(s): Toselli, A.; Klawonn, A.
Abstract:
A class of FETI methods for the edge element approximation of vector field problems in two dimensions is introduced and analyzed. First, an abstract framework is presented for the analysis of a class of FETI methods where a natural coarse problem, associated with the substructures, is lacking. Then, a family of FETI methods for edge element approximations is proposed. It is shown that the condition number of the corresponding method is independent of the number of substructures and grows only polylogarithmically with the number of unknowns associated with individual substructures. The estimate is also independent of the jumps of both of the coefficients of the original problem. Numerical results validating the theoretical bounds are given. The method and its analysis can be easily generalized to Raviart-Thomas element approximations in two and three dimensions.
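The polylogarithmic growth stated above is usually expressed, with H the substructure diameter and h the element size, as a bound of the form (a generic statement of the standard FETI-type estimate, not a quotation of the paper's theorem):

```latex
\kappa \;\le\; C \left( 1 + \log \frac{H}{h} \right)^{2},
```

where the constant C is independent of h, H, and the jumps of the coefficients.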
-
TR1999-785
1999
Domain Decomposition Methods for Vector Field Problems
Toselli, A.
Abstract
|
PDF
Title: Domain Decomposition Methods for Vector Field Problems
Author(s): Toselli, A.
Abstract:
Finite element approximation of vector equations gives rise to very large, sparse linear systems. In this dissertation, we study some domain decomposition methods for finite element approximations of vector--valued problems, involving the curl and the divergence operators. Edge and Raviart--Thomas finite elements are employed. Problems involving the curl operator arise, for instance, when approximating Maxwell's equations and the stream function--vorticity formulation of Stokes' problem, while mixed approximations of second order elliptic equations and stabilized mixed formulations of Stokes' problem give rise to problems involving the divergence operator.
We first consider Maxwell's equations in three dimensional conductive media using implicit time--stepping. We prove that the condition number of a two-level overlapping algorithm is bounded independently of the number of unknowns, the number of subregions, and the time step.
For the same equation in two dimensions, we consider two new iterative substructuring methods. The first one is based on individual edges, while the second one is a Neumann-Neumann method. We show that the condition numbers of the corresponding methods increase slowly with the number of unknowns in each substructure, but are independent of the time step and even large jumps of the coefficients. We also analyze similar preconditioners for a three--dimensional vector problem involving the divergence operator, and prove that the preconditioners are quasi--optimal and scalable in this case as well.
For each method, we provide a series of numerical experiments that confirm our theoretical analysis.
This work generalizes well--known results for scalar second order elliptic equations and has required the development of several new technical tools.
-
TR1999-786
1999
Neumann-Neumann Methods for Vector Field Problems
Toselli, A.
Abstract
|
PDF
Title: Neumann-Neumann Methods for Vector Field Problems
Author(s): Toselli, A.
Abstract:
In this paper, we study some Schwarz methods of Neumann-Neumann type for some vector field problems, discretized with the lowest order Raviart-Thomas and Nedelec finite elements. We consider a hybrid Schwarz preconditioner consisting of a coarse component, which involves the solution of the original problem on a coarse mesh, and local ones, which involve the solution of Neumann problems on the elements of the coarse triangulation, also called substructures. We show that the condition number of the corresponding method is independent of the number of substructures and grows logarithmically with the number of unknowns associated with an individual substructure. It is also independent of jumps in both of the coefficients of the original problem. The numerical results presented validate our theoretical bound.
-
TR1999-794
1999
Transparent Network Connectivity in Dynamic Cluster Environments
Fu, X.;
Wang, H.; Karamcheti, V.
Abstract
|
PDF
Title: Transparent Network Connectivity in Dynamic Cluster Environments
Author(s): Fu, X.; Wang, H.; Karamcheti, V.
Abstract:
Improvements in microprocessor and networking performance have made networks of workstations a very attractive platform for high-end parallel and distributed computing. However, the effective deployment of such environments requires addressing two problems not associated with dedicated parallel machines: heterogeneous resource capabilities and dynamic availability. Achieving good performance requires that application components be able to migrate between cluster resources and efficiently adapt to the underlying resource capabilities. An important component of the required support is maintaining network connectivity, which directly affects both the transparency of migration to the application and its performance after migration. Unfortunately, existing approaches rely on either extensive operating system modifications or new APIs to maintain network connectivity, both of which limit their wider applicability.
This paper presents the design, implementation, and performance of a transparent network connectivity layer for dynamic cluster environments. Our design uses the techniques of API interception and virtualization to construct a transparent layer in user space; use of the layer requires no modification either to the application or the underlying operating system and messaging layers. Our layer enables the migration of application components without breaking network connections, and additionally permits adaptation to the characteristics of the underlying networking substrate. Experiments with supporting a persistent socket interface in two environments---an Ethernet LAN on top of TCP/IP, and a Myrinet LAN on top of Fast Messages---show that our approach incurs minimal overheads and can effectively select the best substrate for implementing application communication requirements.
-
Ph.D. Thesis
1999
Destructive Effect Analysis And Finite Differencing For Strict Functional Languages
Yung, Chung
Abstract
|
PDF
Title: Destructive Effect Analysis And Finite Differencing For Strict Functional Languages
Candidate: Yung, Chung
Advisor(s): Goldberg, Benjamin
Abstract:
Destructive update optimization is critical for writing scientific codes in functional languages. Pure functional languages do not allow mutations, destructive updates, or selective updates, so straightforward implementations of functional languages induce large amounts of copying to preserve program semantics. The unnecessary copying of data can increase both the execution time and the memory requirements of an application. Destructive update optimization makes an essential improvement to the implementation of functional programs with compound data structures, such as arrays, sets, and aggregates. Moreover, many compiler optimization techniques that depend on side-effects take destructive update analysis as their input. Among such techniques, finite differencing captures common yet distinctive program constructions involving costly repeated calculations and transforms them into more efficient incremental constructions.
In this dissertation, we develop a new approach to destructive update analysis, called destructive effect analysis. We present the semantic model and the abstract interpretation of destructive effect analysis. We designed EAS, an experimental applicative language with set expressions. The implementation of destructive effect analysis is integrated with the optimization phase of our experimental EAS compiler. We apply finite differencing to optimize pure functional programs, and we show the performance improvement that results from applying the finite differencing optimization together with the destructive update optimization.
-
TR1998-759
1998
Genomics via Optical Mapping II(A): Restriction Maps from Partial Molecules and Variations
Anantharaman, T.;
Mishra, B.
Abstract
|
PDF
Title: Genomics via Optical Mapping II(A): Restriction Maps from Partial Molecules and Variations
Author(s): Anantharaman, T.; Mishra, B.
Abstract:
In this paper, we extend an algorithmic approach to constructing ordered restriction maps from images of a population of individual DNA molecules (clones) digested by restriction enzymes. The original algorithm was capable of producing high-resolution, high-accuracy maps rapidly and in a scalable manner given a certain class of data errors, including contamination, sizing errors, false and missing restriction sites, and unknown orientation. Here we extend this set of errors to include possibly broken molecules where the amount of breakage is not known beforehand, which is necessary for handling larger clones. In an earlier paper~\cite{optmapII}, we had shown that the problem of making maps from molecules with end fragments missing as the only source of error is NP-complete. We also show how to handle multiple reliability levels in the input data when calling restriction sites, where the actual reliability levels are not known and must be inferred from the data.
-
TR1998-760
1998
Genomics via Optical Mapping III: Contiging Genomic DNA and Variations
Anantharaman, T.;
Mishra, B.; Schwartz, D.
Abstract
|
PDF
Title: Genomics via Optical Mapping III: Contiging Genomic DNA and Variations
Author(s): Anantharaman, T.; Mishra, B.; Schwartz, D.
Abstract:
In this paper, we describe our algorithmic approach to constructing an alignment (Contig) of a set of optical maps created from the images of individual genomic DNA molecules digested by restriction enzymes. Generally, these DNA segments are sized in the range of 1--4 Mb. The problem of assembling clone contig maps is a simpler special case of this contig problem and is handled by our algorithms. The goal is to devise contiging algorithms capable of producing high-quality composite maps rapidly and in a scalable manner. The resulting software is a key component of our physical mapping automation tools and has been used routinely to create composite maps of various microorganisms (E. coli, P. falciparum, and D. radiodurans). The experimental results appear highly promising.
-
TR1998-770
1998
Genomics via Optical Mapping (I): Probabilistic Analysis of Optical Mapping Models
Anantharaman, T.;
Mishra, B.
Abstract
|
PDF
Title: Genomics via Optical Mapping (I): Probabilistic Analysis of Optical Mapping Models
Author(s): Anantharaman, T.; Mishra, B.
Abstract:
We study several simple models for optical mapping and explore their power and limitations when applied to the construction of maps of clones (e.g., lambdas, cosmids, BACs and YACs). We provide precise lower and upper bounds on the number of clone molecules needed to create the correct map of the clone. Our probabilistic analysis shows that as the number of clone molecules in the optical mapping data is increased, the probability of successfully computing the map jumps from 0 to 1 at a fairly small number of molecules (for typical values of the parameters, the transition point is around 70 molecules). These observations have been independently verified with extensive tests, on both in vitro and in silico data.
In addition, we compare our results with those derived by Karp and Shamir in a recent paper. We hope that this paper clarifies certain misconceptions and explains why the model proposed in Anantharaman et al. (1997) has proven so powerful.
-
TR1998-769
1998
An Efficient Primal-Dual Interior-Point Method for Minimizing a Sum of Euclidean Norms
Anderson, K. D.;
Christiansen, E.; Conn, A. R.; Overton, M. L.
Abstract
|
PDF
Title: An Efficient Primal-Dual Interior-Point Method for Minimizing a Sum of Euclidean Norms
Author(s): Anderson, K. D.; Christiansen, E.; Conn, A. R.; Overton, M. L.
Abstract:
The problem of minimizing a sum of Euclidean norms dates from the 17th century and may be the earliest example of duality in the mathematical programming literature. This nonsmooth optimization problem arises in many different kinds of modern scientific applications. We derive a primal-dual interior-point algorithm for the problem, by applying Newton's method directly to a system of nonlinear equations characterizing primal and dual feasibility and a perturbed complementarity condition. The main work at each step consists of solving a system of linear equations (the Schur complement equations). This Schur complement matrix is not symmetric, unlike in linear programming. We incorporate a Mehrotra-type predictor-corrector scheme and present some experimental results comparing several variations of the algorithm, including, as one option, explicit symmetrization of the Schur complement with a skew corrector term. We also present results obtained from a code implemented to solve large sparse problems, using a symmetrized Schur complement. This has been applied to problems arising in plastic collapse analysis, with hundreds of thousands of variables and millions of nonzeros in the constraint matrix. The algorithm typically finds accurate solutions in less than 50 iterations and determines physically meaningful solutions previously unobtainable.
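In generic notation (not necessarily the paper's), the primal problem and its dual can be written as:

```latex
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{m} \| A_i x - b_i \|_2
\qquad\text{and}\qquad
\max_{y_1,\dots,y_m} \; -\sum_{i=1}^{m} b_i^T y_i
\quad \text{s.t.} \quad \sum_{i=1}^{m} A_i^T y_i = 0, \;\; \| y_i \|_2 \le 1 .
```

At optimality each dual vector is aligned with its residual, \( \|r_i\|\, y_i = r_i \) with \( r_i = A_i x - b_i \); the interior-point method applies Newton's method to the feasibility conditions together with a perturbed form of this complementarity condition, driving the perturbation parameter to zero.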
-
TR1998-762
1998
Just-in-Time Transparent Resource Management
Baratloo, A.
Abstract
|
PDF
Title: Just-in-Time Transparent Resource Management
Author(s): Baratloo, A.
Abstract:
This paper presents the design and the implementation of a resource management system for monitoring computing resources on a network and for dynamically allocating them to concurrently executing jobs. In particular, it is designed to support adaptive parallel computations---computations that benefit from addition of new machines, and can tolerate removal of machines while executing. The challenge for such a resource manager is to communicate the availability of resources to running programs even when the programs were not developed to work with external resource managers. Our main contribution is a novel mechanism addressing this issue, built on low-level features common to popular parallel programming systems.
Existing resource management systems for adaptive computations either require tight integration with the operating system (DRMS), or require an integration with a programming system that is aware of external resource managers (e.g. Condor/CARMI, MPVM, Piranha). Thus, in each case, their support is limited to a single type of programming system. In contrast, our resource management system is unique in supporting several unmodified parallel programming systems. Furthermore, the system runs with user-level privilege, and thus cannot compromise the security of the network.
The underlying mechanism and the overall system have been validated on a dynamically changing mix of jobs, some sequential, some PVM, some MPI, and some Calypso computations. We demonstrate the feasibility and the usefulness of our approach, thus showing how to construct a middleware resource management system that enhances the utilization of distributed systems.
-
Ph.D. Thesis
1998
Foveation Techniques and Scheduling Issues in Thinwire Visualization
Chang, Ee-Chien
Abstract
|
PDF
Title: Foveation Techniques and Scheduling Issues in Thinwire Visualization
Candidate: Chang, Ee-Chien
Advisor(s): Yap, Chee
Abstract:
We are interested in the visualization of large images across a network. Upon request, the server sends an image across the network to the client, who, in turn, presents it to the viewer. A key observation is that, at any moment, the viewer is mainly interested in a region around his gaze point in the image. To exploit this, we let the viewer interactively indicate this point, and the selected region receives higher priority in the transmission process. As a result, the displayed image is a ``space-variant'' image. A fundamental difference between this scheme and the usual progressive transmission scheme is that we place more emphasis on the visualization process. This shift in emphasis opens up new perspectives on the problem. In this thesis, we focus on this difference.
In chapter two, we formalize the operation of ``foveating an image'', study how to distribute the resolution over an image, and how to progressively refine such a space-variant image. Motivated by properties of human vision, we propose two methods for the construction of space-variant images. In chapter three, we formulate and study an abstract on-line scheduling problem which is motivated by interactions between the client and the server. In the fourth and last chapter, we describe details and issues in an implementation.
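A toy sketch of "foveating an image": full resolution is kept near the gaze point and pixels are averaged over progressively larger blocks with distance. The linear falloff rule and block-averaging scheme are illustrative assumptions, not the thesis's actual constructions:

```python
# Illustrative foveation sketch: produce a space-variant image in which the
# effective resolution decreases with distance from the gaze point.

def foveate(image, gaze, base_block=1, falloff=8):
    """image: 2-D list of pixel values; gaze: (row, col).
    Each output pixel is the average over the block it falls in, with the
    block size growing by one base unit every `falloff` pixels from gaze."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            d = max(abs(r - gaze[0]), abs(c - gaze[1]))   # Chebyshev distance
            block = base_block * (1 + d // falloff)       # coarser when far
            r0, c0 = (r // block) * block, (c // block) * block
            cells = [image[i][j]
                     for i in range(r0, min(r0 + block, rows))
                     for j in range(c0, min(c0 + block, cols))]
            out[r][c] = sum(cells) / len(cells)
    return out
```

In the thinwire setting the point of such a map is that the coarse far-periphery blocks need far fewer transmitted bits than the full-resolution region around the gaze point.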
-
TR1998-771
1998
Exploiting Application Tunability for Efficient, Predictable, Parallel Resource Management
Chang, F.;
Karamcheti, V.; Kedem, Z.
Abstract
|
PDF
Title: Exploiting Application Tunability for Efficient, Predictable, Parallel Resource Management
Author(s): Chang, F.; Karamcheti, V.; Kedem, Z.
Abstract:
Parallel computing is becoming increasingly central and mainstream, driven both by the widespread availability of commodity SMP and high-performance cluster platforms, and by the growing use of parallelism in general-purpose applications such as image recognition, virtual reality, and media processing. In addition to performance requirements, the latter computations impose soft real-time constraints, necessitating efficient, predictable parallel resource management. Unfortunately, traditional resource management approaches in both parallel and real-time systems are inadequate for meeting this objective: the parallel approaches focus primarily on improving application performance and/or system utilization at the cost of arbitrarily delaying a given application, while the real-time approaches are overly conservative, sacrificing system utilization in order to meet application deadlines. In this paper, we propose a novel approach for increasing parallel system utilization while meeting application soft real-time deadlines. Our approach exploits the application tunability found in several general-purpose computations. Tunability refers to an application's ability to trade off resource requirements over time while maintaining a desired level of output quality. In other words, a large allocation of resources in one stage of the computation's lifetime may compensate, in a parameterizable manner, for a smaller allocation in another stage. We first describe language extensions to support tunability in the Calypso programming system, a component of the MILAN metacomputing project, and evaluate their expressiveness using an image processing application. We then characterize the performance benefits of tunability, using a synthetic task system to systematically identify its benefits and shortcomings.
Our results are very encouraging: application tunability is convenient to express, and can significantly improve parallel system utilization for computations with predictability requirements.
-
Ph.D. Thesis
1998
Techniques to Improve the Performance of Software-based Distributed Shared Memory Systems
Chu, Churngwei
Abstract
|
PDF
Title: Techniques to Improve the Performance of Software-based Distributed Shared Memory Systems
Candidate: Chu, Churngwei
Advisor(s): Kedem, Zvi
Abstract:
Software distributed shared memory systems are able to provide programmers with the illusion of global shared memory on networked workstations without special hardware support. This thesis identifies two problems in contemporary software distributed shared memory systems: (1) poor application programming interfaces for programmers who need to solve complicated synchronization problems and (2) inefficiencies in traditional multiple writer protocols. We propose a solution to both of these problems. One is the introduction of user-definable high level synchronization primitives to provide a better application programming interface. The other is the single-owner protocol to provide efficiency. In order to accommodate user-definable high level synchronization primitives, a variant of release consistency is also proposed.
User-definable high level synchronization primitives provide a paradigm for users to define their own synchronization primitives instead of relying on traditional low level synchronization primitives, such as barriers and locks. The single-owner protocol reduces the number of messages from O(n^2) (the number needed by the multiple-owner protocol) to Theta(n) when n writers first write to a page and n readers then read it. Unlike some multiple-owner protocols, the single-owner protocol performs garbage collection asynchronously, and the size of a memory-update message is smaller in most cases.
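A back-of-the-envelope sketch of the message-count comparison for the n-writers-then-n-readers scenario. The counting rules below are simplified assumptions for illustration, not the thesis's protocol accounting:

```python
# Simplified message-count model for n writers followed by n readers of one
# page. In a multiple-owner protocol, each reader must collect diffs from
# every writer; in the single-owner protocol, writers forward their updates
# to the one owner, and readers fetch the merged page from it.

def multiple_owner_messages(n):
    return n * n          # each of n readers contacts each of n writers

def single_owner_messages(n):
    return n + n          # n writer-to-owner updates + n owner-to-reader fetches
```

Under this model the gap grows quadratically: for 10 writers and 10 readers the counts are 100 versus 20 messages, which matches the O(n^2)-to-Theta(n) claim above.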
We also evaluate the tradeoffs between the single-owner protocol and multiple-owner protocols. We have found that in most cases the single-owner protocol uses fewer messages than multiple-owner protocols, but there are some computations which may perform better with some multiple-owner protocols. In order to combine the advantages of both protocols, we propose a hybrid owner protocol which can be used to increase the efficiency in an adaptive way, with some pages managed by the single-owner protocol and some by a multiple-owner protocol.
Finally, five applications are evaluated using the single-owner protocol and a particular multiple-owner protocol called the lazy invalidate protocol. The performance of these two protocols is compared. We also demonstrate the use of user-definable high level synchronization primitives on one of the applications, and compare its performance against the same application constructed using only low-level synchronization primitives.
-
Ph.D. Thesis
1998
Deformable Object Tabula Rasa: A Zoomable User Interface System
Fox, David
Abstract
|
PDF
Title: Deformable Object Tabula Rasa: A Zoomable User Interface System
Candidate: Fox, David
Advisor(s): Perlin, Ken
Abstract:
This dissertation develops the concept of a zoomable user interface and identifies the design elements which are important to its viability as a successor to the desktop style of interface. The implementation of an example system named Tabula Rasa is described, along with the design and implementation of some sample applications for Tabula Rasa. We show how programming techniques such as delegation and multi-methods can be used to solve certain problems that arise in the implementation of Tabula Rasa, and in the implementation of Tabula Rasa applications.
Over the past thirty years the desktop or WIMP (Windows, Icons, Menus, Pointer) user interface has made the computer into a tool that allows non-specialists to get a variety of tasks done. In recent years, however, the applications available under this interface have become larger and more unwieldy, taking into themselves more and more marginally related functionality. Any inter-operability between applications must be explicitly designed in.
The Zoomable User Interface (ZUI) is a relatively new metaphor designed as a successor to the desktop interface. It is inspired by the Pad system, which is based on a zoomable surface of unlimited resolution. Just as the desktop interface has a set of essential elements, a ZUI has a set of elements each of which is vital to the whole. These include
- a zoomable imaging model,
- a persistent virtual geography for data objects,
- semantic zooming to optimize the utility of screen space,
- work-through interfaces for application objects,
- a constraint system for ensuring the consistency of the interface elements.
These basic elements combine to produce an environment that takes advantage of the user's spatial memory to create a more expansive and dynamic working environment, as well as encouraging finer grained applications that automatically inter-operate with various types of data objects and applications.
-
TR1998-763
1998
A New Solution to the Hidden Copy Problem
Goyal, D.;
Paige, R.
Abstract
|
PDF
Title: A New Solution to the Hidden Copy Problem
Author(s): Goyal, D.; Paige, R.
Abstract:
We consider the well-known problem of avoiding unnecessary costly copying that arises in languages with copy/value semantics and large aggregate structures such as arrays, sets, or files. The origins of many recent studies focusing on avoiding copies of flat arrays in functional languages may be traced back to SETL copy optimization [Schwartz 75]. The problem is hard, and progress is slow, but a successful solution is crucial to achieving a pointer-free style of programming envisioned by [Hoare 75].
We give a new solution to copy optimization that uses dynamic reference counts and lazy copying to implement updates efficiently in an imperative language with arbitrarily nested finite sets and maps (which can easily model arrays, records and other aggregate datatypes). Big step operational semantics and abstract interpretations are used to prove the soundness of the analysis and the correctness of the transformation. An efficient algorithm to implement the analysis is presented. The approach is supported by realistic empirical evidence.
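A minimal sketch of the runtime half of this scheme, dynamic reference counts with lazy (copy-on-write) copying, for a set-valued datatype. The class and method names are illustrative, not the paper's:

```python
# Sketch of lazy copying with dynamic reference counts: a value-semantics
# assignment only bumps a shared reference count; the actual copy is
# deferred until the first destructive update of shared storage.

class CowSet:
    def __init__(self, elems=()):
        self._data = set(elems)
        self._refs = [1]            # mutable ref count shared by all aliases

    def assign(self):
        """Value-semantics assignment: O(1), shares storage lazily."""
        other = CowSet.__new__(CowSet)
        other._data, other._refs = self._data, self._refs
        self._refs[0] += 1
        return other

    def add(self, x):
        """Destructive update: copy first iff the storage is shared."""
        if self._refs[0] > 1:
            self._refs[0] -= 1
            self._data = set(self._data)   # the deferred copy happens here
            self._refs = [1]
        self._data.add(x)

    def __contains__(self, x):
        return x in self._data
```

When the static analysis proves a reference count is 1 at an update site, the runtime check (and the copy) can be eliminated entirely, which is where the destructive update optimization pays off.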
Our solution anticipates the introduction of arbitrarily nested polymorphic sets and maps into Java. It may also provide a new efficient strategy for implementing object cloning in Java and object assignment in C++. We illustrate how our methods might improve the recent approach of [Wand and Clinger 98] to avoiding copies of flat arrays in a language of first-order recursion equations.
-
TR1998-757
1998
Competitive Equilibrium
Greenwald, A.
Abstract
|
PDF
Title: Competitive Equilibrium
Author(s): Greenwald, A.
Abstract:
This report includes a modern account of welfare economics and competitive equilibrium theory. In particular, competitive, or Walrasian, equilibrium is defined. Moreover, existence, optimality, and uniqueness are demonstrated. However, no reliable mechanism for computing equilibrium prices is suggested. At this stage, the problem shifts from the realm of economics to an algorithmic problem in computer science.
-
TR1998-758
1998
Learning to Play Network Games
Greenwald, A.
Abstract
|
PDF
Title: Learning to Play Network Games
Author(s): Greenwald, A.
Abstract:
The idea of learning to play equilibrium strategies in repeated games is an active area of research in the game-theoretic community. Game theorists are primarily concerned with the equilibrium outcomes of learning algorithms in the limit: i.e., over an infinite amount of time. One of the goals of this research is to apply computer science ideology to learning theory. In particular, this thesis will consider imposing restrictions on traditional game-theoretic learning algorithms such that players learn to play approximations to equilibrium strategies in bounded amounts of time. The idea of such bounded learning algorithms is to quickly learn to exploit the obvious, while ignoring any subtleties.
The idea of bounded learning is applicable to network games, in which players learn to utilize networks during times of minimal congestion. These games are atypical as compared with traditional games described in the game-theoretic literature, since their underlying structure is not commonly understood by the players, and moreover, common knowledge of rationality is not a valid assumption. As such, this class of repeated games does not naturally lend itself to belief-based learning algorithms. Rather, this thesis will investigate learning algorithms for network games that are analyzed on the basis of performance, without requiring that players maintain prior beliefs about expected network congestion. In sum, the initial focus of this thesis is to explore an application of computer science ideology to learning algorithms in game theory; secondly, bounded game-theoretic learning will be applied to routing and congestion problems in network environments.
-
Ph.D. Thesis
1998
Metacomputing and Resource Allocation on the World Wide Web
Karaul, Mehmet
Abstract
|
PDF
Title: Metacomputing and Resource Allocation on the World Wide Web
Candidate: Karaul, Mehmet
Advisor(s): Kedem, Zvi
Abstract:
The World Wide Web is a challenging environment for distributed computing due to its sheer size and the heterogeneity and unreliability of machines and networks. Therefore, scalability, load balancing, and fault masking play an important role for Web-based systems. In this dissertation, I present novel mechanisms for resource allocation and parallel computing on the Web addressing these issues.
Large Web sites rely on a set of geographically dispersed replicated servers among which client requests should be appropriately allocated. I present a scalable decentralized design, which pushes the allocation functionality onto the clients. At its core lies a pricing strategy that provides incentives to clients to control the dispatching of requests while still allowing clients to take advantage of geographic proximity. An adaptive algorithm updates prices to deal with dynamic changes. A prototype system based on this architecture has been implemented and its functionality validated through a series of experiments.
Parallel computing on local area networks is based on a variety of mechanisms targeting the properties of this environment. However, these mechanisms do not effectively extend to wide area networks due to issues such as heterogeneity, security, and administrative boundaries. I present a prototype system which allows application programmers to write parallel programs in Java and allows Java-capable browsers to execute parallel tasks. It comprises a virtual machine model which isolates the program from the execution environment, and a runtime system realizing this machine on the Web. Load balancing and fault masking are transparently provided by the runtime system.
-
Ph.D. Thesis
1998
Free Parallel Data Mining
Li, Bin
Abstract
|
PDF
Title: Free Parallel Data Mining
Candidate: Li, Bin
Advisor(s): Shasha, Dennis
Abstract:
Data mining is the emerging field of applying statistical and artificial intelligence techniques to the problem of finding novel, useful, and non-trivial patterns from large databases. This thesis presents a framework for easily and efficiently parallelizing data mining algorithms. We propose an acyclic directed graph structure, exploration dag (E-dag), to characterize the computation model of data mining algorithms in classification rule mining, association rule mining, and combinatorial pattern discovery. An E-dag can be constructively formed in parallel from specifications of a data mining problem, then a parallel E-dag traversal is performed on the fly to efficiently solve the problem. The effectiveness of the E-dag framework is demonstrated in biological pattern discovery applications.
We also explore data parallelism in data mining applications. The cross-validation and the windowing techniques used in classification tree algorithms facilitate easy development of efficient data partitioning programs. In this spirit, we present a new classification tree algorithm called NyuMiner that guarantees that every split in a classification tree is optimal with respect to any given impurity function and any given maximum number of branches allowed in a split. NyuMiner can be easily parallelized using the data partitioning technique.
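The split-optimality guarantee can be illustrated in the simplest setting: a binary split on one numeric attribute found by an exhaustive threshold scan. Gini impurity is an illustrative choice here; NyuMiner's guarantee holds for any given impurity function and branch count, and this sketch is not its implementation:

```python
def gini(labels):
    """Gini impurity of a multiset of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_binary_split(xs, ys):
    """Scan every threshold between distinct attribute values and return
    (cost, threshold) minimizing the size-weighted impurity of the two
    sides -- optimal by construction for this impurity and split family."""
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    best = (float("inf"), None)
    for k in range(1, n):
        if pairs[k - 1][0] == pairs[k][0]:
            continue                       # cannot split between equal values
        left = [y for _, y in pairs[:k]]
        right = [y for _, y in pairs[k:]]
        cost = (len(left) * gini(left) + len(right) * gini(right)) / n
        thr = (pairs[k - 1][0] + pairs[k][0]) / 2
        if cost < best[0]:
            best = (cost, thr)
    return best
```

The exhaustive scan is also what makes data partitioning natural: disjoint blocks of rows can accumulate class counts per candidate threshold independently before a cheap merge.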
This thesis also presents a software architecture for running parallel data mining programs on networks of workstations (NOW) in a fault-tolerant manner. The software architecture is based on Persistent Linda (PLinda), a robust distributed parallel computing system which automatically utilizes idle cycles. Templates are provided for application programmers to develop parallel data mining programs in PLinda. Parallelization frameworks and the software architecture form a synergy that makes free efficient data mining realistic.
-
Ph.D. Thesis
1998
Fast Algorithms for Discovering the Maximum Frequent Set
Lin, Dao-I
Abstract
|
PDF
Title: Fast Algorithms for Discovering the Maximum Frequent Set
Candidate: Lin, Dao-I
Advisor(s): Kedem, Zvi
Abstract:
Discovering frequent itemsets is a key problem in important data mining applications, such as the discovery of association rules, strong rules, episodes, and minimal keys. Typical algorithms for solving this problem operate in a bottom-up breadth-first search direction. The computation starts from frequent 1-itemsets (minimal length frequent itemsets) and continues until all maximal (length) frequent itemsets are found. During the execution, every frequent itemset is explicitly considered. Such algorithms perform reasonably well when all maximal frequent itemsets are short. However, performance drastically decreases when some of the maximal frequent itemsets are relatively long. We present a new algorithm which combines both the bottom-up and the top-down searches. The primary search direction is still bottom-up, but a restricted search is also conducted in the top-down direction. This search is used only for maintaining and updating a new data structure we designed, the maximum frequent candidate set. It is used to prune candidates in the bottom-up search. A very important characteristic of the algorithm is that it does not require explicit examination of every frequent itemset. Therefore the algorithm performs well even when some maximal frequent itemsets are long. As its output, the algorithm produces the maximum frequent set, i.e., the set containing all maximal frequent itemsets, thus specifying immediately all frequent itemsets. We evaluate the performance of the algorithm using well-known synthetic benchmark databases and real-life census and stock market databases. The improvement in performance can be up to several orders of magnitude, compared to the best current algorithms.
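For contrast with the thesis's combined search, here is the naive bottom-up levelwise baseline that computes the maximum frequent set while examining every frequent itemset. The function name and the single top-down probe are illustrative; the maximum-frequent-candidate-set pruning that is the thesis's contribution is deliberately omitted:

```python
from itertools import combinations

def maximal_frequent(transactions, minsup):
    """Return the maximum frequent set: all maximal frequent itemsets.
    Naive Apriori-style levelwise search; the thesis's algorithm avoids
    counting every frequent itemset, which this sketch does not attempt."""
    def support(itemset):
        return sum(itemset <= t for t in transactions)

    items = sorted({i for t in transactions for i in t})
    top = frozenset(items)
    if support(top) >= minsup:       # one top-down probe: everything is frequent
        return [top]
    level = [frozenset([i]) for i in items if support(frozenset([i])) >= minsup]
    maximal = []
    while level:
        # join pairs of frequent k-itemsets into (k+1)-candidates, then count
        cands = {a | b for a, b in combinations(level, 2) if len(a | b) == len(a) + 1}
        nxt = [c for c in cands if support(c) >= minsup]
        # a frequent itemset is maximal iff no frequent proper superset exists;
        # by anti-monotonicity it suffices to check the next level
        maximal.extend(s for s in level if not any(s < c for c in nxt))
        level = nxt
    return maximal
```

When maximal itemsets are long, this baseline counts exponentially many subsets on the way up, which is exactly the cost the top-down candidate structure is designed to cut.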
-
Ph.D. Thesis
1998
Algorithmic Techniques in Computational Genomics
Parida, Laxmi
Abstract
|
PDF
Title: Algorithmic Techniques in Computational Genomics
Candidate: Parida, Laxmi
Advisor(s): Mishra, Bud
Abstract:
This thesis explores the application of algorithmic techniques in understanding and solving computational problems arising in Genomics (called Computational Genomics). In the first part of the thesis we focus on the problem of reconstructing physical maps from data, related to "reading" the genome of an organism, and in the second part we focus on problems related to "interpreting" (in a very limited sense) the genome. The main contributions of the thesis are understanding the computational complexity of, and designing algorithms for some key problems in both these domains.
The primary goal of the Human Genome Project is to determine the entire three billion base pair sequence of the human genome and locate roughly 100,000 genes on the DNA. Recently, a set of single molecule methods (such as optical mapping) has been developed that allows one to create physical maps (a set of landmarks on the DNA whose locations are well defined), but can only do so by combining a population of data in the presence of errors from various sources. In the first part of the thesis, we focus on the problem of computing physical maps from data that arise in single molecule methods. We describe two combinatorial models of the problem termed Exclusive Binary Flip Cut (EBFC) and Weighted Consistency Graph (WCG) problems. We show that both problems are MAX SNP-hard and give bounds on the approximation factors achievable. We give a polynomial-time 0.878-approximation algorithm for the EBFC problem and a 0.817-approximation algorithm for the WCG problem, using the maxcut approximation algorithm due to Goemans and Williamson. We also give a practical algorithm with low polynomial running time that works well on simulated and real data. Naksha is an implementation of this algorithm and a demonstration is available at http://www.cs.nyu.edu/parida/naksha.html. We also have similar results on complexity for generalizations of the problem which model various other sources of errors. We have generalized our complexity and algorithmic results to the case where there is more than one population in the data (which we call the K-populations problem). In the second part of the thesis, we focus on "interpreting" the genome. We consider the problem of discovering patterns (or motifs) in strings on a finite alphabet: we show that by appropriately defining irredundant motifs, the number of irredundant motifs is only quadratic in the input size. We use these irredundant motifs in designing algorithms to align multiple genome or protein sequences.
Alignment of sequences aids in comparing the structure and function of the proteins.
-
Ph.D. Thesis
1998
Thinksheet: a Tool for Information Navigation
Piatko, Peter
Abstract
|
PDF
Title: Thinksheet: a Tool for Information Navigation
Candidate: Piatko, Peter
Advisor(s): Shasha, Dennis
Abstract:
Imagine that you are a ``knowledge worker'' in the coming millennium. You must synthesize information and make decisions such as ``Which benefits plan to use?'' ``What do the regulations say about this course of action?'' ``How does my job fit into the corporate business plan?'' or even ``How does this program work?'' If the dream of digital libraries is to bring you all material relevant to your task, you may find yourself drowning before long. Reading is harder than talking to people who know the relevant documents and can tell you what you're interested in. That is what many current knowledge workers do, giving rise to professions such as insurance consultant, lawyer, benefits specialist, and so on.
Imagine by contrast that the documents you retrieve could be tailored precisely to your needs. That is, imagine that the document might ask you questions and produce a document filtered and organized according to those you have answered.
We have been developing software that allows writers to tailor documents to the specific needs of large groups of readers. Thinksheet combines the technologies of expert systems, spreadsheets, and database query processing to provide tailoring capabilities for complex documents. The authoring model is only slightly more complex than a spreadsheet.
This thesis discusses the conceptual model and the implementation of Thinksheet, and applications for complex documents and metadata.
-
TR1998-766
1998
Steering Clear of Triples: Deriving the Control Flow Graph Directly from the Abstract Syntax Tree in C Programs
Schwartz, N.
Abstract
|
PDF
Title: Steering Clear of Triples: Deriving the Control Flow Graph Directly from the Abstract Syntax Tree in C Programs
Author(s): Schwartz, N.
Abstract:
This article explores the extension of Morgenthaler's Virtual Control Flow technique, which derives control flow semantics directly from the Abstract Syntax Tree, from the relatively coarse granularity of syntactic C expressions to the finer granularity of basic block expressions, that is, expressions without embedded control flow. We explain why this is a better level of abstraction for program analysis, and discuss the elements of an efficient and elegant solution, motivating the presentation by appealing to a more explicit intermediate form. We present our algorithm, and conclude with remarks about the suitability of Morgenthaler's version of Virtual Control Flow for customary exhaustive data-flow analysis.
-
Ph.D. Thesis
1998
Corpus-based Parsing and Sublanguage Studies
Sekine, Satoshi
Abstract
|
PDF
Title: Corpus-based Parsing and Sublanguage Studies
Candidate: Sekine, Satoshi
Advisor(s): Grishman, Ralph
Abstract:
There are two main topics in this thesis, a corpus-based parser and a study of sublanguage.
A novel approach to corpus-based parsing is proposed. In this framework, a probabilistic grammar is constructed whose rules are partial trees from a syntactically-bracketed corpus. The distinctive feature is that the partial trees are multi-layered. In other words, only a small number of non-terminals are used to cut the initial trees; other grammatical nodes are embedded into the partial trees, and hence into the grammar rules. Good parsing performance was obtained, even with small training corpora. Several techniques were developed to improve the parser's accuracy, including in particular two methods for incorporating lexical information. One method uses probabilities of binary lexical dependencies; the other directly lexicalizes the grammar rules. Because the grammar rules are long, the number of rules is huge - more than thirty thousand from a corpus of one million words. A parsing algorithm which can efficiently handle such a large grammar is described. A Japanese parser based on the same idea was also developed.
Corpus-based sublanguage studies were conducted to relate the notion of sublanguage to lexical and syntactic properties of a text. A statistical method based on word frequencies was developed to define sublanguages within a collection of documents; this method was evaluated by identifying the sublanguage of new documents. Relative frequencies of different syntactic structures were used to assess the domain dependency of syntactic structure in a multi-domain corpus. Cross-entropy measurements showed a clear distinction between fiction and non-fiction domains. Experiments were then performed in which grammars trained on individual domains, or sets of domains, were used to parse texts in the same or other domains. The results correlate with the measurements of syntactic variation across domains; in particular, the best performance is achieved using grammars trained on the same or similar domains.
The parsing and sublanguage techniques were applied to speech recognition. Sublanguage techniques were able to increase recognition accuracy, and some promising cases were found where the parser was able to correct recognition errors.
-
TR1998-773
1998
A Numerical Study of a Class of FETI Preconditioners for Mortar Finite Elements in Two Dimensions
Stefanica, D.;
Klawonn, A.
Abstract
|
PDF
Title: A Numerical Study of a Class of FETI Preconditioners for Mortar Finite Elements in Two Dimensions
Author(s): Stefanica, D.; Klawonn, A.
Abstract:
The FETI method is an iterative substructuring method using Lagrange multipliers. It is actively used in industrial-size parallel codes for solving difficult computational mechanics problems, for example, in the ANSYS system. Mortar finite elements are nonconforming finite elements that also allow for a geometrically nonconforming decomposition of the computational domain and for the optimal coupling of different variational approximations in different subdomains. We present a numerical study of three different FETI preconditioners for two dimensional, self-adjoint, elliptic equations discretized by mortar finite elements.
-
TR1998-767
1998
On the L(2) Stability of the 1-D Mortar Projection
Stefanica, D.
Abstract
|
PDF
Title: On the L(2) Stability of the 1-D Mortar Projection
Author(s): Stefanica, D.
Abstract:
It was previously known that the one dimensional mortar finite element projection is stable in the $L^2$ norm, provided that the ratio of any two neighboring mesh intervals is uniformly bounded, but with the constant in the bound depending on the maximum value of that ratio. In this paper, we show that this projection is stable in the $L^2$ norm, independently of the properties of the nonmortar mesh. The 1D trace of the mortar space considered here is a piecewise polynomial space of arbitrary degree; therefore, our result can be used for both the $h$ and the $hp$ version of the mortar finite element.
-
TR1998-774
1998
Poincare and Friedrichs Inequalities For Mortar Finite Element Methods
Stefanica, D.
Abstract
|
PDF
Title: Poincare and Friedrichs Inequalities For Mortar Finite Element Methods
Author(s): Stefanica, D.
Abstract:
Mortar finite elements are nonconforming finite elements that allow for a geometrically nonconforming decomposition of the computational domain and, at the same time, for the optimal coupling of different variational approximations in different subregions. Poincare and Friedrichs inequalities for mortar finite elements are derived. Using these inequalities, it is shown that the condition number for self-adjoint elliptic problems discretized using mortars is comparable to that of the conforming finite element case. Geometrically non-conforming mortars of the second generation are considered, i.e. no continuity conditions are imposed at the vertices of the subregions.
-
TR1998-768
1998
An Iterative Substructuring Method for Maxwell's Equations in Two Dimensions
Toselli, A.;
Widlund, O. B.; Wohlmuth, B. I.
Abstract
|
PDF
Title: An Iterative Substructuring Method for Maxwell's Equations in Two Dimensions
Author(s): Toselli, A.; Widlund, O. B.; Wohlmuth, B. I.
Abstract:
Iterative substructuring methods, also known as Schur complement methods, form an important family of domain decomposition algorithms. They are preconditioned conjugate gradient methods where solvers on local subregions and a solver on a coarse mesh are used to construct the preconditioner. For conforming finite element approximations of $H^1$, it is known that the number of conjugate gradient steps required to reduce the residual norm by a fixed factor is independent of the number of substructures and that it grows only as the logarithm of the dimension of the local problem associated with an individual substructure. In this paper, the same result is established for similar iterative methods for low--order N{\'e}d{\'e}lec finite elements, which approximate $\Hcurl$ in two dimensions. Results of numerical experiments are also provided.
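The outer iteration common to these methods is preconditioned conjugate gradients. A generic Python sketch, with the substructuring preconditioner abstracted as a black-box callable `M_inv` (this is the textbook PCG loop, not the paper's solver):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=1000):
    """Preconditioned conjugate gradients for a symmetric positive
    definite matrix A. M_inv applies the preconditioner -- in an
    iterative substructuring method this would combine local subdomain
    solves with a coarse solve; here it is just a callable."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate search direction
        rz = rz_new
    return x
```

The condition-number bounds in the paper translate directly into a bound on how many of these iterations are needed to reduce the residual norm by a fixed factor.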
-
TR1998-765
1998
Some Results on Overlapping Schwarz Methods for the Helmholtz Equation Employing Perfectly Matched Layers
Toselli, A.
Abstract
|
PDF
Title: Some Results on Overlapping Schwarz Methods for the Helmholtz Equation Employing Perfectly Matched Layers
Author(s): Toselli, A.
Abstract:
In this paper, we build a class of overlapping Schwarz preconditioners for a finite element approximation of the Helmholtz equation in two dimensions. Perfectly Matched Layers are employed to build the local problems and two kinds of boundary conditions are employed to match the local solutions. Numerical results are presented to compare the different preconditioners.
-
Ph.D. Thesis
1998
Abstract Models of Distributed Memory Management
Ungureanu, Cristian
Abstract
|
PDF
Title: Abstract Models of Distributed Memory Management
Candidate: Ungureanu, Cristian
Advisor(s): Goldberg, Benjamin
Abstract:
In this dissertation, we present a model suitable for reasoning about memory management in concurrent and distributed systems. The model provides a suitable level of abstraction: it is low-level enough so that we can express communication, allocation and garbage collection, but otherwise hides many of the lower-level details of an actual implementation. Using it, we can give compact, and provably correct, characterizations of garbage collection algorithms in distributed systems.
The models are rewriting systems whose terms are programs in which the ``code'' and the ``store'' are syntactically apparent. Evaluation is expressed as conditional rewriting and includes store and communication operations. Using techniques developed for communicating and concurrent systems we give a semantics suitable for proving equivalence of such programs. Garbage collection becomes a rewriting relation that removes part of the store without affecting the behavior of the program.
We introduce and prove correct a very general garbage collection rule based on reachability; any actual implementation which is capable of providing the transitions (including their atomicity constraints) specified by the strategy is therefore correct. We give examples of such specific implementations, and show how their correctness follows from the correctness of the general relation.
-
TR1998-775
1998
An Iterative Substructuring Method for Raviart-Thomas Vector Fields in Three Dimensions
Wohlmuth, B. I.;
Toselli, A.; Widlund, O. B.
Abstract
|
PDF
Title: An Iterative Substructuring Method for Raviart-Thomas Vector Fields in Three Dimensions
Author(s): Wohlmuth, B. I.; Toselli, A.; Widlund, O. B.
Abstract:
The iterative substructuring methods, also known as Schur complement methods, form one of two important families of domain decomposition algorithms. They are based on a partitioning of a given region, on which the partial differential equation is defined, into non-overlapping substructures. The preconditioners of these conjugate gradient methods are then defined in terms of local problems defined on individual substructures and pairs of substructures, and, in addition, a global problem of low dimension. An iterative method of this kind is introduced for the lowest order Raviart-Thomas finite elements in three dimensions and it is shown that the condition number of the relevant operator is independent of the number of substructures and grows only as the square of the logarithm of the number of unknowns associated with an individual substructure. The theoretical bounds are confirmed by a series of numerical experiments.
-
TR1998-761
1998
Finding Idle Work Periods on Networks of Workstations
Wyckoff, P.;
Jeong, K.; Johnson, T.
Abstract
|
PDF
Title: Finding Idle Work Periods on Networks of Workstations
Author(s): Wyckoff, P.; Jeong, K.; Johnson, T.
Abstract:
We present a simple technique for predicting the probability that an idle workstation will continue to be idle for $i$ minutes, given that it has been idle for $x$ minutes (i.e., find the {\em remaining idle period probability} $P(i;x)$). By idle we mean that the workstation owner is not interactively using the workstation or executing other tasks on it. The results are particularly applicable to the scheduling of tasks in systems that harvest cycles from idle-only workstations. Our Remaining Idle Period Probability Predictor (RIPPP) uses the distribution of the lengths of idle periods on the managed workstations. Collecting, storing, and processing these distributions (in the form of histograms) is a small overhead on modern workstations (a few kilobytes of storage per workstation).
We investigated the behavior of our RIPPP with usage traces of 31 workstations collected over a five month period, and discovered the following six results. (1) The distribution of one month of idle periods predicts the remaining idle period probability in the next month for most workstations. (2) Different workstations tend to have significantly different idle period length distributions. (3) The average length of an idle period does not necessarily correlate well with the probability of being able to find long idle periods, contrary to intuition and previous scheduling heuristics. (4) A workstation that has been idle a long time does not necessarily have a high probability of remaining idle for a long time. (5) Using the time of day can improve predictions. (6) The length of the previous and the current idle periods are positively correlated, but the length of the previous idle period is not strongly correlated with finding long remaining idle periods.
Based on these studies, we conclude that an effective way to find idle workstations is to collect their idle period length distribution and use it to compute $P(i;x)$. We believe our analysis will be applicable to predicting the length of busy periods, which is useful for deciding whether to migrate or suspend tasks when a workstation becomes busy (the owner reclaims it).
From our results, we have developed a remaining idle period probability toolkit which includes a statistics collector and a prediction library in C. This will be available from our project homepage.
-
Ph.D. Thesis
1998
Fault-tolerant parallel computing on networks of non-dedicated workstations
Wyckoff, Peter
Abstract
|
PDF
Title: Fault-tolerant parallel computing on networks of non-dedicated workstations
Candidate: Wyckoff, Peter
Abstract:
This thesis addresses fault tolerance issues in parallel computing on loosely-coupled networks of non-dedicated, heterogeneous workstations. The efficiency of fault tolerance mechanisms is dictated by network and failure characteristics. Traditional approaches to fault tolerance are efficient when network and failure characteristics are identical across workstations, such as in a local area network of homogeneous workstations; however, a loosely coupled network of non-dedicated workstations has non-uniform network and failure characteristics. This thesis presents the design and implementation of a flexible fault tolerance runtime system that allows each process in a parallel application to use one of three rollback recovery mechanisms. Rollback recovery is achieved using a lightweight form of transaction, which performance results show incurs almost no overhead. The system is built on top of the Linda coordination language and runs on Alpha, Linux, Solaris and SGI workstations and Java-enabled browsers. For barrier synchronous parallel applications, a new equi-distant checkpointing interval selection method, the expected maximum heuristic, is presented. The method is applicable to any rollback recovery system in which processes recover from failure independently and communicate through a reliable third party. Simulation results show that the expected maximum heuristic has near optimal performance under a variety of different failure rates and barrier lengths.
-
TR1997-735
1997
Iterative Substructuring Preconditioners for Mortar Element Methods in Two Dimensions
Achdou, Y.;
Maday, Y.; Widlund, O. B.
Abstract
|
PDF
Title: Iterative Substructuring Preconditioners for Mortar Element Methods in Two Dimensions
Author(s): Achdou, Y.; Maday, Y.; Widlund, O. B.
Abstract:
The mortar methods are based on domain decomposition and they allow for the coupling of different variational approximations in different subdomains. The resulting methods are nonconforming but still yield optimal approximations. In this paper, we will discuss iterative substructuring algorithms for the algebraic systems arising from the discretization of symmetric, second order, elliptic equations in two dimensions. Both spectral and finite element methods, for geometrically conforming as well as nonconforming domain decompositions, are studied. In each case, we obtain a polylogarithmic bound on the condition number of the preconditioned matrix.
-
TR1997-734
1997
SDPPACK User's Guide -- Version 0.8 Beta
Alizadeh, F.;
Haeberly, J.; Nayakkankuppam, M.V.; Overton, M.L.
Abstract
|
PDF
Title: SDPPACK User's Guide -- Version 0.8 Beta
Author(s): Alizadeh, F.; Haeberly, J.; Nayakkankuppam, M.V.; Overton, M.L.
Abstract:
-
TR1997-737
1997
SDPPACK User's Guide -- Version 0.9 Beta for Matlab 5.0
Alizadeh, F.;
Haeberly, J. A.; Nayakkankuppa, M. V.; Overton, M.L.; Schmieta, S.
Abstract
|
PDF
Title: SDPPACK User's Guide -- Version 0.9 Beta for Matlab 5.0
Author(s): Alizadeh, F.; Haeberly, J. A.; Nayakkankuppa, M. V.; Overton, M.L.; Schmieta, S.
Abstract:
This report describes SDPpack Version 0.9 Beta for Matlab 5.0. This version extends the previous release for semidefinite programming (SDP) to mixed semidefinite--quadratic--linear programs (SQLP), i.e.\ linear optimization problems over a product of semidefinite cones, quadratic cones and the nonnegative orthant. Together, these cones make up all possible homogeneous self-dual cones over the reals.
The main routine implements a primal--dual Mehrotra predictor--corrector scheme based on the XZ+ZX search direction for SDP. More specialized routines are also available, one to solve SDPs with diagonal constraints only, and one to compute the Lov\'asz $\theta$ function of a graph, both using the XZ search direction. Routines are also provided to determine whether an SQLP is primal or dual degenerate at its solution and whether strict complementarity holds there. Primal nondegeneracy is associated with dual uniqueness and dual nondegeneracy with primal uniqueness, though these conditions are not equivalent if strict complementarity fails to hold.
A routine is also provided to compute the condition number of an SQLP. The Matlab code calls mex files for improved performance; binaries are available for several platforms. Benchmarks show that the codes provide highly accurate solutions to a wide variety of problems.
-
Ph.D. Thesis
1997
Multiscale Snakes: Resolution-Appropriate Shape Descriptions
Baldwin, Bernard
Abstract
|
PDF
Title: Multiscale Snakes: Resolution-Appropriate Shape Descriptions
Candidate: Baldwin, Bernard
Advisor(s): Geiger, Davi
Abstract:
We present a new type of "snake" in which the dimensionality of the shapes is scaled appropriately for the resolution of the images in which the shapes are embedded. We define shapes as an ordered list of control points and compute the principal components of the shapes in a prior training set. Our energy function is based upon the Mahalanobis distance of a given shape from the mean shape and on the Mahalanobis distance of the image attributes from image attribute values extracted from the training set. We show that the derivative of this energy function with respect to the modal weights is reduced as the image resolution is reduced, and that the derivative of the energy scales with the variance associated with each mode. We exploit this property to determine the subset of the modes which are relevant at a particular level of image resolution, thereby reducing the dimensionality of the shapes. We implement a coarse-to-fine search procedure in the image and shape domains simultaneously, and demonstrate this procedure on the identification of anatomic structures in Computed Tomography images and on the identification of military vehicles in range images.
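The shape term of such an energy, the Mahalanobis distance of a shape from the training mean in the principal-mode basis, can be sketched as follows (an illustrative reconstruction, not the thesis code; `modes` is assumed to hold the principal components as columns):

```python
import numpy as np

def shape_energy(shape, mean_shape, modes, variances):
    """Mahalanobis distance of a shape (flattened control points) from
    the mean shape: project the deviation onto the principal modes and
    weight each modal coefficient by the inverse of its training variance."""
    dev = np.asarray(shape, float) - np.asarray(mean_shape, float)
    w = modes.T @ dev                  # modal weights of this shape
    return float(np.sum(w ** 2 / variances))
```

Restricting `modes` and `variances` to the high-variance subset of modes is then the resolution-dependent dimensionality reduction described above: low-variance modes, whose energy gradients vanish at coarse resolution, are simply dropped from the search.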
-
TR1997-748
1997
KnittingFactory: An Infrastructure for Distributed Web Applications
Baratloo, A.;
Karaul, M.; Karl, H.; Kedem, Z. M.
Abstract
|
PDF
Title: KnittingFactory: An Infrastructure for Distributed Web Applications
Author(s): Baratloo, A.; Karaul, M.; Karl, H.; Kedem, Z. M.
Abstract:
While Java and applets have created a new perspective for Web applications, some problems are still unsolved. Among these are the question of how Java applets can find other members of the collaboration session, how to deal with the restrictions imposed by the Java security model, and how to overcome the inability of applets to communicate directly, even if they belong to the same distributed application. KnittingFactory addresses the problem of finding other members of a collaboration session by providing a distributed registry system where the search is performed within a Web browser without violating its security model; the problem of arbitrary placement of applications by providing the core functionality for downloading applets from an arbitrary node; and finally the problem of direct applet-applet communication by using the Java Remote Method Invocation mechanisms to give applets information on how their fellow applets can be reached. Two example applications validate this concept and demonstrate the ease of use of KnittingFactory.
-
TR1997-743
1997
Iterative Substructuring Algorithms for the P-Version Finite Element Method for Elliptic Problems
Bica, I.
Abstract
|
PDF
Title: Iterative Substructuring Algorithms for the P-Version Finite Element Method for Elliptic Problems
Author(s): Bica, I.
Abstract:
In this thesis, we study iterative substructuring methods for linear elliptic problems approximated by the $p$-version finite element method. They form a class of nonoverlapping domain decomposition methods, for which the information exchange between neighboring subdomains is limited to the variables directly associated with the interface, i.e. those common to more than one subregion. Our objective is to design algorithms in $3D$ for which we can find an upper bound for the {\it condition number} $\kappa$ of the preconditioned linear system, which is independent of the number of subdomains and grows slowly with $p$.
Iterative substructuring methods for the $h$-version finite element method and for spectral elements have previously been developed and analysed by several authors. However, some very real difficulties remained when the extension of these methods and their analysis to the $p$-version finite element method was attempted, such as the lack of extension theorems for polynomials. The corresponding results are well known for Sobolev spaces, but their extension to finite element spaces is quite intricate. In our technical work, we use and further develop extension theorems for polynomials in order to prove bounds on the condition numbers of several algorithms.
We have also carried out extensive numerical tests. Our programs serve several purposes: not only can we compute the condition numbers and study the rate of convergence for a variety of the algorithms that we have developed, but we can also compute the bounds on these condition numbers, as given by the theory. This is useful because the theory predicts the order of magnitude of the actual condition numbers.
-
TR1997-733
1997
On the Singular Limit of the Quantum-Classical Molecular Dynamics Model
Bornemann, F. A.;
Schuette, C.
Abstract
|
PDF
Title: On the Singular Limit of the Quantum-Classical Molecular Dynamics Model
Author(s): Bornemann, F. A.; Schuette, C.
Abstract:
In molecular dynamics applications there is a growing interest in so-called mixed quantum-classical models. These models describe most atoms of the molecular system by means of classical mechanics, but an important, small portion of the system by means of quantum mechanics. A particularly extensively used model, the QCMD model, consists of a singularly perturbed Schrödinger equation nonlinearly coupled to a classical Newtonian equation of motion.
This paper studies the singular limit of the QCMD model for finite dimensional Hilbert spaces. The main result states that this limit is given by the time-dependent Born-Oppenheimer model of quantum theory ---provided the Hamiltonian under consideration has a smooth spectral decomposition. This result is strongly related to the quantum adiabatic theorem. The proof uses the method of weak convergence by directly discussing the density matrix instead of the wave functions. This technique avoids the discussion of highly oscillatory phases.
On the other hand, the limit of the QCMD model is of a different nature if the spectral decomposition of the Hamiltonian happens not to be smooth. We will present a generic example for which the limit set is not a unique trajectory of a limit dynamical system but rather a funnel consisting of infinitely many trajectories.
-
TR1997-753
1997
Overlapping Schwarz Algorithms for Solving Helmholtz's Equation
Cai, X.;
Casarin, M. A., Jr.; Elliot, F. W., Jr.; Widlund, O. B.
Abstract
|
PDF
Title: Overlapping Schwarz Algorithms for Solving Helmholtz's Equation
Author(s): Cai, X.; Casarin, M. A., Jr.; Elliot, F. W., Jr.; Widlund, O. B.
Abstract:
In this paper, prepared for the proceedings of the international conference on domain decomposition held in Boulder, CO in August 1997, we give a progress report on the development of a new family of domain decomposition methods for the solution of Helmholtz's equation.
We present three algorithms based on overlapping Schwarz methods; in our favorite method we proceed to the continuous finite element approximation of Helmholtz's equation through a sequence of discontinuous iterates. While this is, quite possibly, a new type of overlapping Schwarz method, we were inspired to develop this idea by the thesis of Bruno Despr\'{e}s.
-
TR1997-745
1997
Smile consistency - A Memory Consistency Model with User Definable High Level Synchronization Primitives
Chu, C.;
Piatko, P.
Abstract
|
PDF
Title: Smile consistency - A Memory Consistency Model with User Definable High Level Synchronization Primitives
Author(s): Chu, C.; Piatko, P.
Abstract:
We propose a new, natural memory consistency model, Smile consistency. Smile provides not only an intuitive memory consistency model but also a paradigm in which users can define their own synchronization primitives, called synchronization classes. Programmers can use synchronization classes to ease the programming work related to basic synchronization operations. Therefore, in addition to shared memory, threads can also communicate with each other via synchronization objects, instances of synchronization classes. Programs with high-level synchronization objects may also outperform those with only basic synchronization primitives.
-
TR1997-754
1997
Order of Magnitude Comparisons of Distance
Davis, E.
Abstract
|
PDF
Title: Order of Magnitude Comparisons of Distance
Author(s): Davis, E.
Abstract:
Order of magnitude reasoning --- reasoning by rough comparisons of the sizes of quantities --- is often called "back of the envelope calculation", with the implication that the calculations are quick though approximate. This paper exhibits an interesting class of constraint sets in which order of magnitude reasoning is demonstrably much faster than ordinary quantitative reasoning. Specifically, we present a polynomial-time algorithm that can solve a set of constraints of the form ``Points a and b are much closer together than points c and d.'' We prove that this algorithm can be applied if ``much closer together'' is interpreted either as referring to an infinite difference in scale or as referring to a finite difference in scale, as long as the difference in scale is greater than the number of variables in the constraint set. We also prove that the first-order theory over such constraints is decidable.
-
TR1997-738
1997
The Naive Physics Perplex
Davis, E.
Abstract
|
PDF
Title: The Naive Physics Perplex
Author(s): Davis, E.
Abstract:
The ``Naive Physics Manifesto'' of Pat Hayes [1978] proposes a large-scale project of developing a formal theory encompassing the entire knowledge of physics of naive reasoners, expressed in a declarative symbolic form. The theory is organized in clusters of closely interconnected concepts and axioms. More recent work in the representation of commonsense physical knowledge has followed a somewhat different methodology. The goal has been to develop a competence theory powerful enough to justify commonsense physical inferences, and the research is organized in microworlds, each microworld covering a small range of physical phenomena. In this paper we compare the advantages and disadvantages of the two approaches. We also discuss some difficult key issues in automating commonsense physical reasoning.
-
TR1997-732
1997
The On-Line K-Server Problem
Floratos, A.
Abstract
|
PDF
Title: The On-Line K-Server Problem
Author(s): Floratos, A.
Abstract:
We survey the research performed during the last few years on the on-line $k$-server problem over metric spaces. A variety of algorithms are presented \mbox{--- both} deterministic and \mbox{randomized ---} and their performance is studied in the framework of competitive analysis. Restrictions of the problem to special cases of metric spaces are also considered.
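As a toy illustration of the on-line setting (not an algorithm from the survey; the names and the line metric are our own choices), the following sketch serves each request greedily with the nearest server. Greedy is known to have an unbounded competitive ratio in general, which is precisely what motivates the more sophisticated algorithms the survey studies.

```python
def greedy_k_server(servers, requests):
    """Serve each request with the nearest server on the real line
    (ties broken by index), moving that server to the request point.
    Returns the total movement cost and the final server positions."""
    servers = list(servers)
    cost = 0.0
    for r in requests:
        # pick the server whose distance to the request is smallest
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        cost += abs(servers[i] - r)
        servers[i] = r
    return cost, servers
```

For example, with servers at 0 and 10 and requests at 4 then 6, greedy moves the same server twice; an adversary can repeat two nearby requests forever to make greedy's cost arbitrarily worse than optimal.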
-
TR1997-746
1997
Overlapping Schwarz Methods for Vector Valued Elliptic Problems in Three Dimensions
Hiptmair, R.;
Toselli, A.
Abstract
|
PDF
Title: Overlapping Schwarz Methods for Vector Valued Elliptic Problems in Three Dimensions
Author(s): Hiptmair, R.; Toselli, A.
Abstract:
This paper is intended as a survey of current results on algorithmic and theoretical aspects of overlapping Schwarz methods for discrete $\Hcurl$ and $\Hdiv$--elliptic problems set in suitable finite element spaces. The emphasis is on a unified framework for the motivation and theoretical study of the various approaches developed in recent years.
Generalized Helmholtz decompositions -- orthogonal decompositions into the null space of the relevant differential operator and its complement -- are crucial in our considerations. It turns out that the decompositions the Schwarz methods are based upon have to be designed separately for both components. In the case of the null space, the construction has to rely on liftings into spaces of discrete potentials.
Taking the cue from well-known Schwarz schemes for second order elliptic problems, we devise uniformly stable splittings of both parts of the Helmholtz decomposition. They immediately give rise to powerful preconditioners and iterative solvers.
-
TR1997-751
1997
Adaptive Mixed Hybrid and Macro-Hybrid Finite Element Methods
Hoppe, R. H. W.;
Wohlmuth, B.
Abstract
|
PDF
Title: Adaptive Mixed Hybrid and Macro-Hybrid Finite Element Methods
Author(s): Hoppe, R. H. W.; Wohlmuth, B.
Abstract:
In this paper, we consider efficient multilevel based iterative solvers and efficient and reliable a posteriori error estimators for mixed hybrid and macro-hybrid finite element discretizations of elliptic boundary value problems. We give an overview concerning the state-of-the-art techniques for these nonconforming approaches and illustrate the performance of the adaptivity concepts realized by some selected numerical examples.
-
TR1997-752
1997
WebSeAl: Web Server Allocation
Karaul, M. H.;
Korilis, Y. A.; Orda, A.
Abstract
|
PDF
Title: WebSeAl: Web Server Allocation
Author(s): Karaul, M. H.; Korilis, Y. A.; Orda, A.
Abstract:
With the rapid growth of the World Wide Web, clients attempting to access some popular web sites are experiencing slow response times due to server load and network congestion. Replacing the single server machine with a set of replicated servers is a cost-effective solution to partition server load which also allows incremental scalability and fault transparency. Distributing these replicated servers geographically can reduce network congestion and increase availability. However, distributed web sites are faced with the issue of allocating servers: how do clients find out about the replicas and how do they decide which one to contact? Popular web sites have well publicized server names and require a transparent mapping of the public server name to replicated servers.
Unlike most traditional approaches, we propose a technique that pushes the server allocation functionality onto the client. We argue that this approach scales well and results in increased performance in many cases. Building on theoretical work based on game theory, we show that the usage of individual replicas can be effectively controlled with cost functions even when the clients are noncooperative. We present the design and implementation of WebSeAl, our prototype system realizing these techniques. WebSeAl does not require any changes to existing client and server code, conforms to all standards, and does not generate any control messages. Preliminary experiments utilizing servers on six continents and in controlled settings indicate that WebSeAl improves performance significantly while imposing little overhead.
-
TR1997-742
1997
Pincer-Search: A New Algorithm for Discovering the Maximum Frequent Set
Lin, D-I.;
Kedem, Z.
Abstract
|
PDF
Title: Pincer-Search: A New Algorithm for Discovering the Maximum Frequent Set
Author(s): Lin, D-I.; Kedem, Z.
Abstract:
Discovering frequent itemsets is a key problem in important data mining applications, such as the discovery of association rules, strong rules, episodes, and minimal keys. Typical algorithms for solving this problem operate in a bottom-up breadth-first search direction. The computation starts from frequent 1-itemsets (minimal length frequent itemsets) and continues until all maximal (length) frequent itemsets are found. During the execution, every frequent itemset is explicitly considered. Such algorithms perform reasonably well when all maximal frequent itemsets are short. However, performance drastically decreases when some of the maximal frequent itemsets are relatively long. We present a new algorithm that combines both the bottom-up and top-down directions. The main search direction is still bottom-up, but a restricted search is conducted in the top-down direction. This search is used only for maintaining and updating a new data structure we designed, the maximum frequent candidate set. It is used to prune candidates in the bottom-up search. An important characteristic of the algorithm is that it is not necessary to explicitly examine every frequent itemset; it therefore performs well even when some maximal frequent itemsets are long. As its output, the algorithm produces the maximum frequent set, i.e., the set containing all maximal frequent itemsets, which therefore specifies immediately all frequent itemsets. We evaluate the performance of the algorithm using a well-known benchmark database. The improvements can be up to several orders of magnitude, compared to the best current algorithms.
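The interplay of the two search directions can be sketched as follows. This is an illustrative simplification of the Pincer-Search idea under our own naming; the recovery machinery and candidate-generation optimizations of the full algorithm are omitted.

```python
def support(itemset, transactions):
    """Number of transactions containing `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def pincer_search(transactions, minsup):
    """Return the maximum frequent set: all maximal frequent itemsets."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    freq1 = {i for i in items
             if support(frozenset([i]), transactions) >= minsup}
    mfcs = {frozenset(freq1)}        # top-down candidates (MFCS)
    found = set()                    # every frequent itemset discovered
    level = {frozenset([i]) for i in freq1}
    k = 1
    while level:
        # Top-down step: a frequent MFCS member is maximal frequent,
        # and all of its subsets are then known frequent for free.
        for cand in list(mfcs):
            if support(cand, transactions) >= minsup:
                mfcs.discard(cand)
                found.add(cand)
        nxt = set()
        for c in level:
            # Skip counting when c is a subset of a known frequent set.
            if any(c <= m for m in found) or \
               support(c, transactions) >= minsup:
                found.add(c)
                nxt.add(c)
            else:
                # Infrequent itemset: split every MFCS member containing it.
                for y in [y for y in mfcs if c <= y]:
                    mfcs.discard(y)
                    mfcs.update(y - {item} for item in c)
        k += 1
        level = {a | b for a in nxt for b in nxt if len(a | b) == k}
    # Keep only the maximal frequent itemsets.
    return {m for m in found if not any(m < n for n in found)}
```

The top-down step is what lets a long maximal frequent itemset be confirmed with a single count instead of counting all of its exponentially many frequent subsets.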
-
Ph.D. Thesis
1997
Deformable Object Recognition with Articulations and Occlusions
Liu, Tyng-Luh
Abstract
|
PDF
Title: Deformable Object Recognition with Articulations and Occlusions
Candidate: Liu, Tyng-Luh
Advisor(s): Geiger, Davi
Abstract:
The subject of this thesis is deformable object recognition. We concentrate on issues of articulations and of occlusions.
In order to find a target object (undergoing articulations) in an image we use the following procedures: (i) extracting key features in an image, (ii) detecting key points in the model, (iii) efficiently searching through possible image segmentations and (iv) comparing and grouping shapes. Together, they reconstruct the target object in the image. A Bayesian rationale is presented to justify this strategy.
Our main focus in this thesis is on (iii) and (iv). More precisely, we are interested in shape representation, shape similarity, and combining shape similarity with image segmentation.
We consider two possible shape representations for an object. The first is given by its shape contour (SC), or silhouette, and the other is described by the structure of symmetry axis (SA), or skeleton, which has a unique free tree structure. For shape similarity, we review a string matching method based on the SC representation and then, we develop a tree matching scheme using the SA-tree representation. The advantage of this approach is that it becomes extremely simple to account for articulations and occlusions. As a novelty, the SA is obtained via a shape comparison between an SC and its mirror version. Finally we study how to integrate the shape module, for both shape representations (SC and SA), with an active contour tracker to yield an image segmentation.
Our efforts through all these issues have been to provide methods that are guaranteed to find optimal solutions.
We also address the topic of occluded object recognition but from a different viewpoint. Our method is to treat it as a function approximation problem with an over-complete basis (a library of image templates), but also accounts for occlusions, where the basis superposition principle is no longer valid. Since the basis is over-complete, there are infinitely many ways to decompose the image. We are motivated to select a sparse/compact representation of the image and to account for occlusions and noise.
-
Ph.D. Thesis
1997
Partial evaluation of concurrent programs
Marinescu, Mihnea
Abstract
|
PDF
Title: Partial evaluation of concurrent programs
Candidate: Marinescu, Mihnea
Advisor(s): Goldberg, Benjamin
Abstract:
The goal of this dissertation is to develop partial evaluation (program specialization) techniques specific to concurrent programs.
The language chosen for this investigation is a very simple CSP-like language. A standard binding-time analysis for imperative languages is conservatively extended in order to deal with the basic concurrent constructs: synchronous communication and nondeterministic choice. Based on the resulting binding-time annotations, a specialization transformation is formally defined using a labeled transition system with actions. The correctness of the partial evaluation is stated and a proof is included. This result is closely related to (strong) bisimulation, the equivalence relation on transition systems. We name the two directions of the bisimulation equivalence soundness and completeness, respectively.
In order to maintain a clear presentation, this simple specialization algorithm addresses only the data transfer component of the communication; a post-specialization analysis for the detection and removal of redundant synchronizations (i.e. synchronizations whose removal does not increase the nondeterminism of a program) is presented separately. This redundant-synchronization analysis is based on the characterization of dependencies in a CSP-like language.
Several pragmatic issues such as improving the binding-time analysis, controlling loop unrolling and the consequences of lifting nondeterminism from run-time to specialization-time are discussed. Two additional binding-time analyses are presented. We call one of them speculative because the specialization transformation based on it is sound but not complete. We call the other one extended because it includes an on-line redundant-synchronization analysis.
The relationship between partial evaluation and different types of fairness is also studied. In order to deal with a wide range of fair run-time systems, ranging from strong to weak, and from process-fair to channel-fair and communication-fair, we use a general operational framework for specifying fairness properties as systematic means of reducing nondeterminism. We then prove the correctness (as bisimulation equivalence) or just the soundness of specialization transformations under various binding-time analyses.
Throughout the dissertation, the power of the newly developed techniques is shown in several examples.
-
M.S. Thesis
1997
Real/Expr: Implementation of an Exact Computation Package
Ouchi, Kouji
Abstract
|
PDF
Title: Real/Expr: Implementation of an Exact Computation Package
Candidate: Ouchi, Kouji
Advisor(s): Yap, Chee
Abstract:
The Real/Expr package is a C++ project to support the precision-driven approach to exact computation of geometric algorithms. The package is built on top of the class Real that encompasses a variety of numerical representations. The class Expr captures a set of algebraic expressions on which any comparison can be done precisely.
The software libraries described here are available via the Web page http://simulation.nyu.edu/projects/exact/.
-
TR1997-739
1997
A Uniform Framework for Ordered Restriction Map Problems
Parida, L.
Abstract
|
PDF
Title: A Uniform Framework for Ordered Restriction Map Problems
Author(s): Parida, L.
Abstract:
Optical Mapping is an emerging technology for constructing ordered restriction maps of DNA molecules. The underlying computational problems for this technology have been studied and several cost functions have been proposed in recent literature. Most of these propose combinatorial models; one of them also presents a probabilistic approach. However, it is not {\em a priori} clear as to how these cost functions relate to one another and to the underlying problem. We present a uniform framework for the restriction map problems where each of these various models is a specific instance of the basic framework. We achieve this by identifying the following approaches to the ordered restriction map problem: (1) using data consensus or agreement, and, (2) optimizing a characteristic function of the data. Our framework also opens up the possibility of exploring other cost functions. An additional feature is that we not only integrate the combinatorial models but also analyze the probabilistic model within the same framework. Finally, we indicate the open problems by including a survey of the best known complexity results for these problems.
-
TR1997-740
1997
Inapproximability of Flip-Cut, Shift-Cut and Other problems from Optical Mapping
Parida, L.
Abstract
|
PDF
Title: Inapproximability of Flip-Cut, Shift-Cut and Other problems from Optical Mapping
Author(s): Parida, L.
Abstract:
Optical Mapping is an emerging technology for constructing ordered restriction maps of DNA molecules. The study of the complexity of the problems arising in Optical Mapping has generated considerable interest amongst computer science researchers. In this paper we examine the complexity of these problems.
Optical Mapping leads to various computational problems such as the Binary Flip Cut (BFC) problem, the Weighted Flip Cut (WFC) problem, the Exclusive Binary Flip Cut (EBFC) problem \cite{parida1, parida2}, the Binary Shift Cut (BSC) problem, the Binary Partition Cut (BPC) problem and others. The complexity and the hardness of the BFC and WFC problems were not known. Using the technique of {\em gap-preserving} reduction of the max-cut problem, we show that the BFC and WFC problems are MAX SNP-hard and that achieving an approximation ratio $1-\Upsilon/7$ for these problems is NP-hard, where $\Upsilon$ denotes the upper bound on the polynomial-time approximation factor of the well-known max-cut problem. A slight variation of BFC, BFC$_{\max K}$, had been shown to be NP-hard; we improve this result to show that BFC$_{\max K}$ is MAX SNP-hard and that achieving an approximation ratio $(1-\Upsilon/7)\frac{p_{\max}}{p_{\min}}$ for BFC$_{\max K}$ is NP-hard, where $p_{\min}$ and $p_{\max}$ are the minimum and maximum of the digestion rates in the given problem. The EBFC problem was shown to be NP-Complete; we improve this result to show that EBFC is MAX SNP-hard and that achieving an approximation ratio $1-\Upsilon/7$ for EBFC is NP-hard. However, a dense instance of the EBFC problem does have a PTAS.
The Binary Partition Cut (modeling spurious molecules) problem has been shown to be NP-Complete; we show, in this paper, that a (reasonable) unrestrained version of it has an efficient polynomial-time algorithm. A variation of the Binary Shift Cut (modeling missing fragments), BSC$_{\max K}$, had been shown to be NP-hard \cite{Tom}; we show both versions of this problem to be MAX SNP-hard, and that achieving an approximation ratio $1-\Upsilon/6$ for BSC and a ratio $(1-\Upsilon/6)\frac{p_{\max}}{p_{\min}}$ for BSC$_{\max K}$ is NP-hard. In addition, we show that the $d$-wise Match ($d$M) problem is MAX SNP-hard and that achieving an approximation ratio $1-\Upsilon$ is NP-hard.
-
TR1997-741
1997
Junctions: Detection, Classification and Reconstruction
Parida, L.;
Geiger, D.; Hummel, R.
Abstract
|
PDF
Title: Junctions: Detection, Classification and Reconstruction
Author(s): Parida, L.; Geiger, D.; Hummel, R.
Abstract:
Junctions are important features for image analysis and form a critical aspect of image understanding tasks such as object recognition. We present a unified approach to detecting (location of the center of the junction), classifying (by the number of wedges -- lines, corners, $3$-junctions such as $T$ or $Y$ junctions, or $4$-junctions such as $X$-junctions) and reconstructing junctions (in terms of radius size, the angles of each wedge and the intensity in each of the wedges) in images. Our main contribution is a model of the junction that is complex enough to handle all these issues and yet simple enough to admit an effective dynamic programming solution. Broadly, we use a template deformation framework along with a gradient criterion to detect radial partitions of the template. We use the Minimum Description Length (MDL) principle to obtain the optimal number of partitions that best describes the junction.
Kona is an implementation of this model. We (quantitatively) demonstrate the stability and robustness of the detector by analyzing its behavior in the presence of noise, using synthetic/controlled apparatus. We also present a qualitative study of its behavior on real images.
-
TR1997-744
1997
Iterative Substructuring Methods for Spectral Element Discretizations of Elliptic Systems. I: Compressible Linear Elasticity
Pavarino, L. F.;
Widlund, O. B.
Abstract
|
PDF
Title: Iterative Substructuring Methods for Spectral Element Discretizations of Elliptic Systems. I: Compressible Linear Elasticity
Author(s): Pavarino, L. F.; Widlund, O. B.
Abstract:
An iterative substructuring method for the system of linear elasticity in three dimensions is introduced and analyzed. The pure displacement formulation for compressible materials is discretized with the spectral element method. The resulting stiffness matrix is symmetric and positive definite.
The method proposed provides a domain decomposition preconditioner constructed from local solvers for the interior of each element, and for each face of the elements and a coarse, global solver related to the wire basket of the elements. As in the scalar case, the condition number of the preconditioned operator is independent of the number of spectral elements and grows as the square of the logarithm of the spectral degree.
-
TR1997-755
1997
Iterative Substructuring Methods for Spectral Element Discretizations of Elliptic Systems. II: Mixed Methods for Linear Elasticity and Stokes Flow
Pavarino, L. F.;
Widlund, O. B.
Abstract
|
PDF
Title: Iterative Substructuring Methods for Spectral Element Discretizations of Elliptic Systems. II: Mixed Methods for Linear Elasticity and Stokes Flow
Author(s): Pavarino, L. F.; Widlund, O. B.
Abstract:
Iterative substructuring methods are introduced and analyzed for saddle point problems with a penalty term. Two examples of saddle point problems are considered: the mixed formulation of the linear elasticity system and the generalized Stokes system in three dimensions. These problems are discretized with mixed spectral element methods. The resulting stiffness matrices are symmetric and indefinite. The unknowns interior to each element are first implicitly eliminated by using exact local solvers. The resulting saddle point Schur complement is solved with a Krylov space method with block preconditioners. The velocity block can be approximated by a domain decomposition method, e.g., of wire basket type, which is constructed from local solvers for each face of the elements, and a coarse solver related to the wire basket of the elements. The condition number of the preconditioned operator is independent of the number of spectral elements and is bounded from above by the product of the square of the logarithm of the spectral degree and the inverse of the discrete inf-sup constant of the problem.
-
TR1997-747
1997
Iterative Substructuring Methods for Spectral Element Discretizations of Elliptic Systems in Three Dimensions
Pavarino, L. F.;
Widlund, O. B.
Abstract
|
PDF
Title: Iterative Substructuring Methods for Spectral Element Discretizations of Elliptic Systems in Three Dimensions
Author(s): Pavarino, L. F.; Widlund, O. B.
Abstract:
Spectral element methods are considered for symmetric elliptic systems of second-order partial differential equations, such as the linear elasticity and the Stokes systems in three dimensions. The resulting discrete problems can be positive definite, as in the case of compressible elasticity in pure displacement form, or saddle point problems, as in the case of almost incompressible elasticity in mixed form and Stokes equations. Iterative substructuring algorithms are developed for both cases. They are domain decomposition preconditioners constructed from local solvers for the interior of each element and for each face of the elements and a coarse, global solver related to the wire basket of the elements. In the positive definite case, the condition number of the resulting preconditioned operator is independent of the number of spectral elements and grows at most in proportion to the square of the logarithm of the spectral degree. For saddle point problems, there is an additional factor in the estimate of the condition number, namely, the inverse of the discrete inf-sup constant of the problem.
-
Ph.D. Thesis
1997
Pricing and Hedging Volatility Risk in Interest-Rate Derivatives
Porras, Juan
Abstract
|
PDF
Title: Pricing and Hedging Volatility Risk in Interest-Rate Derivatives
Candidate: Porras, Juan
Advisor(s): Avellaneda, Marco
Abstract:
This work addresses the problem of pricing interest-rate derivative securities and the use of quoted prices of traded instruments to calibrate the corresponding interest-rate dynamics. To this end, an arbitrage-free model of interest rate evolution is adopted, for which the local drift will depend on the history of volatility, thus leading to path-dependent pricing. This model is based on the Heath-Jarrow-Morton formulation but, in addition, presupposes that the volatility process is not defined a priori. This leads to a path-dependent model that can be formulated in a Markovian framework by considering additional state-variables and hence increasing the dimensionality of the computation. Instead of solving the resulting 3-dimensional partial differential equation, an alternative approach, based on conditional expectations of the history of volatility, is taken. This pricing method is applied to a non-linear (adverse volatility) setting, and used as the core of a non-parametric model calibration technique. The algorithm, by performing an optimization over volatility surfaces, finds a volatility surface that matches the market prices of a given set of securities. This method also finds a hedge for volatility risk, using derivative securities as hedging instruments. In particular, we present results obtained for the problem of hedging American swaptions (options on interest-rate swaps) using European swaptions.
The conditional expectation approach is explored further, and found to be of interest in its own right for the pricing of several kinds of path-dependent instruments, providing an alternative to increasing state-space dimension in order to satisfy a Markov property. In particular, we show how this method speeds up the computation of prices for some types of exotic options, while being general enough to apply to both linear and non-linear pricing of portfolios.
-
Ph.D. Thesis
1997
Performance Modeling for Realistic Storage Devices
Shriver, Elizabeth
Abstract
|
PDF
Title: Performance Modeling for Realistic Storage Devices
Candidate: Shriver, Elizabeth
Advisor(s): Siegel, Alan; Wilkes, John
Abstract:
Managing large amounts of storage is difficult and becoming more so as both the complexity and number of storage devices are increasing. One approach to this problem is a self-managing storage system. Since a self-managing storage system is a real-time system, it requires a model that quickly approximates the behavior of the storage device in a workload-dependent fashion. We develop such a model.
Our approach to modeling devices is to model the individual components of the device, such as queues, caches, and disk mechanisms, and then compose the components. To determine the performance of a component, each component modifies the entering workload use patterns and determines the performance from the workload use patterns and the lower-level device behavior. For example, modifying the use patterns allows us to capture the altered spatial locality that occurs when queues reorder their requests.
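The compositional structure described above can be caricatured in a few lines. This is only an illustrative sketch under our own naming, not the model developed in the thesis: each component first transforms the workload description seen by everything below it, and then wraps the predicted mean response time of the lower levels in its own.

```python
class Cache:
    """Toy cache component: hits are served locally, misses go on
    to whatever lies below it in the I/O path."""
    def __init__(self, hit_rate, hit_time):
        self.hit_rate, self.hit_time = hit_rate, hit_time

    def transform(self, workload):
        # Only misses reach the lower levels, so the request rate
        # seen below the cache shrinks.
        return {**workload, "rate": workload["rate"] * (1 - self.hit_rate)}

    def response_time(self, workload, below):
        # Mean of the hit path and the miss path.
        return self.hit_rate * self.hit_time + (1 - self.hit_rate) * below

def compose(components, workload, device_time):
    """Predict mean response time of a component stack: push the
    workload description down through the stack, then fold response
    times back up from the raw device model."""
    seen = []
    for comp in components:
        seen.append((comp, workload))
        workload = comp.transform(workload)
    t = device_time(workload)            # model of the bare device
    for comp, wl in reversed(seen):
        t = comp.response_time(wl, t)
    return t
```

Propagating the transformed workload downward is what lets a queue or cache model capture, for example, the altered spatial locality that the disk mechanism below it actually sees.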
Our model predicts the device behavior in terms of response time within an 8% relative error for an interesting subset of the domain of devices and workloads. To demonstrate this, the model has been validated with synthetic traces of parallel scientific file system applications and traces of transaction processing applications.
Our contributions to the area of performance modeling for storage devices include the following:
- 1.
- Methods to approximate the positioning time for the disk head of a magnetic disk.
- 2.
- Methods to approximate the queue delay for non-FCFS scheduling algorithms.
- 3.
- Methods to approximate the cache-miss probabilities and the full and partial cache-hit probabilities in the data caches in the I/O path using measures of workload spatial locality.
- 4.
- Methods to approximate the mean seek time and rotational latency of the disk mechanism using measures of workload spatial locality.
- 5.
- An infrastructure for developing a composite model. The infrastructure supports the development of more complicated devices and workloads than we have validated.
Together, these mean that we have analytic methods to approximate the behavior of a set of realistic storage devices.
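The positioning-time methods (items 1 and 4 above) can be illustrated with a simple piecewise seek-time curve plus mean rotational latency; the sqrt-then-linear shape is a common disk model, but every coefficient below is a hypothetical placeholder, not a calibrated value from the thesis:

```python
import math

def seek_time_ms(distance_cyl, short_a=3.0, short_b=0.04,
                 long_c=8.0, long_d=0.0008, threshold=400):
    """Piecewise seek-time model: sqrt-shaped for short seeks,
    linear for long seeks. All coefficients are hypothetical."""
    if distance_cyl == 0:
        return 0.0
    if distance_cyl < threshold:
        return short_a + short_b * math.sqrt(distance_cyl)
    return long_c + long_d * distance_cyl

def expected_positioning_time(seek_distances, rpm=7200):
    """Mean seek time over a workload's observed seek distances,
    plus mean rotational latency (half a revolution)."""
    mean_seek = sum(seek_time_ms(d) for d in seek_distances) / len(seek_distances)
    half_rev_ms = 0.5 * 60_000 / rpm   # half a rotation, in ms
    return mean_seek + half_rev_ms
```

Feeding the model the seek-distance distribution actually observed in a workload, rather than assuming uniformly random seeks, is what makes such a component model workload-dependent.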
-
TR1997-736
1997
Overlapping Schwarz Methods for Maxwell's Equations in Three Dimensions
Toselli, A.
Abstract
|
PDF
Title: Overlapping Schwarz Methods for Maxwell's Equations in Three Dimensions
Author(s): Toselli, A.
Abstract:
Two-level overlapping Schwarz methods are considered for finite element problems of 3D Maxwell's equations. Nedelec elements built on tetrahedra and hexahedra are considered. Once the relative overlap is fixed, the condition number of the additive Schwarz method is bounded, independently of the diameter of the triangulation and the number of subregions. A similar result is obtained for a multiplicative method. These bounds are obtained for quasi-uniform triangulations. In addition, for the Dirichlet problem, the convexity of the domain has to be assumed. Our work generalizes well-known results for conforming finite elements for second order elliptic scalar equations.
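The algebraic form of a one-level additive Schwarz preconditioner, $M^{-1} = \sum_i R_i^T A_i^{-1} R_i$, can be sketched on a toy problem; the example below uses a 1-D scalar Laplacian with two overlapping subdomains (an assumption for illustration only --- the report itself concerns Nedelec discretizations of Maxwell's equations and includes a coarse level):

```python
import numpy as np

def laplacian_1d(n):
    """Tridiagonal 1-D Dirichlet Laplacian (scaled by h^2)."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def additive_schwarz_inverse(A, subdomains):
    """One-level additive Schwarz: M^{-1} = sum_i R_i^T A_i^{-1} R_i,
    where each subdomain is an (overlapping) list of indices.
    Dense toy implementation for illustration."""
    M_inv = np.zeros_like(A)
    for idx in subdomains:
        ix = np.ix_(idx, idx)
        M_inv[ix] += np.linalg.inv(A[ix])  # solve on subdomain, extend by zero
    return M_inv
```

With two overlapping subdomains the preconditioned operator $M^{-1}A$ has positive spectrum bounded above by 2 (the number of subdomains), the kind of bound, independent of the mesh, that the abstract refers to.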
-
TR1997-750
1997
The Coupling of Mixed and Conforming Finite Element Discretizations
Wieners, C.;
Wohlmuth, B.
Abstract
|
PDF
Title: The Coupling of Mixed and Conforming Finite Element Discretizations
Author(s): Wieners, C.; Wohlmuth, B.
Abstract:
In this paper, we introduce and analyze a special mortar finite element method. We restrict ourselves to the case of two disjoint subdomains, and use Raviart-Thomas finite elements in one subdomain and conforming finite elements in the other. In particular, this might be interesting for the coupling of different models and materials. Because of the different roles of Dirichlet and Neumann boundary conditions, a variational formulation without a Lagrange multiplier can be presented. It can be shown that no matching conditions for the discrete finite element spaces are necessary at the interface. Using static condensation, a coupling of conforming finite elements and enriched nonconforming Crouzeix-Raviart elements satisfying Dirichlet boundary conditions at the interface is obtained. The Dirichlet problem is then extended to a variational problem on the whole nonconforming ansatz space. It can be shown that this is equivalent to a standard mortar coupling between conforming and Crouzeix-Raviart finite elements where the Lagrange multiplier lives on the side of the Crouzeix-Raviart elements. We note that the Lagrange multiplier represents an approximation of the Neumann boundary condition at the interface. Finally, we present some numerical results and sketch the ideas of the algorithm. The arising saddle point problems are solved by multigrid techniques with transforming smoothers.
-
TR1997-749
1997
Hierarchical A Posteriori Error Estimators for Mortar Finite Element Methods with Lagrange Multipliers
Wohlmuth, B.
Abstract
|
PDF
Title: Hierarchical A Posteriori Error Estimators for Mortar Finite Element Methods with Lagrange Multipliers
Author(s): Wohlmuth, B.
Abstract:
Hierarchical a posteriori error estimators are introduced and analyzed for mortar finite element methods. A weak continuity condition at the interfaces is enforced by means of Lagrange multipliers. The two proposed error estimators are based on a defect correction in higher order finite element spaces and an adequate hierarchical two-level splitting. The first provides upper and lower bounds for the discrete energy norm of the mortar finite element solution whereas the second also estimates the error for the Lagrange multiplier. It is shown that an appropriate measure for the nonconformity of the mortar finite element solution is the weighted $L^2$-norm of the jumps across the interfaces.
-
TR1996-721
1996
Primal-Dual Interior-Point Methods for Semidefinite Programming: Convergence Rates, Stability and Numerical Results
Alizadeh, F.;
Haeberly, J.A.; Overton, M.L.
Abstract
|
PDF
Title: Primal-Dual Interior-Point Methods for Semidefinite Programming: Convergence Rates, Stability and Numerical Results
Author(s): Alizadeh, F.; Haeberly, J.A.; Overton, M.L.
Abstract:
Primal-dual interior-point path-following methods for semidefinite programming (SDP) are considered. Several variants are discussed, based on Newton's method applied to three equations: primal feasibility, dual feasibility, and some form of centering condition.
The focus is on three such algorithms, called respectively the XZ, XZ+ZX and Q methods. For the XZ+ZX and Q algorithms, the Newton system is well-defined and its Jacobian is nonsingular at the solution, under nondegeneracy assumptions. The associated Schur complement matrix has an unbounded condition number on the central path, under the nondegeneracy assumptions and an additional rank assumption.
Practical aspects are discussed, including Mehrotra predictor-corrector variants and issues of numerical stability. Compared to the other methods considered, the XZ+ZX method is more robust with respect to its ability to step close to the boundary, converges more rapidly, and achieves higher accuracy.
-
Ph.D. Thesis
1996
Algorithms in Semi-Algebraic Geometry
Basu, Saugata
Abstract
|
PDF
Title: Algorithms in Semi-Algebraic Geometry
Candidate: Basu, Saugata
Advisor(s): Pollack, Richard
Abstract:
In this thesis we present new algorithms to solve several very general problems of semi-algebraic geometry. Our algorithms are currently the best algorithms for solving these problems. In addition, we have proved new bounds on the topological complexity of real semi-algebraic sets, in terms of the parameters of the polynomial system defining them, which improve some old and widely used results in this field.
The first part of the thesis deals mainly with the decision problem for the first order theory of real closed fields, and the more general problem of quantifier elimination. We give algorithms which improve the complexity of all the previously known algorithms for these problems. Moreover, our techniques allow us to prove some purely mathematical theorems on the number of connected components and on the existence of small rational points in a given semi-algebraic set.
The second part of this work deals with connectivity questions of semi-algebraic sets. We develop new techniques in order to give an algorithm for computing roadmaps of semi-algebraic sets which improves on the complexity of the previous algorithms for this problem.
The third part of this work deals with bounding the topological complexity of semi-algebraic sets in terms of the number and the degrees of the polynomials describing them. We extend and improve a classical and widely used result of Oleinik and Petrovsky (1949), Thom (1965), and Milnor (1964), bounding the sum of the Betti numbers of semi-algebraic sets. Using the ideas behind this result, we give the first singly exponential algorithm for computing the Euler characteristic of an arbitrary semi-algebraic set.
One common thread that links these results is that our bounds are separated into a combinatorial part (the part depending on the number of polynomials) and an algebraic part (the part depending on the degrees of the polynomials). The combinatorial part of the complexity of our algorithms is frequently tight and this marks the improvement of many of our results. This is most striking when one considers that in many applications, for instance in computational geometry, it is the number of polynomials which is the most important parameter (the degrees and the number of variables are usually small). Another important and new feature of some of our results is that when the given semi-algebraic set is contained in a lower dimensional variety, the combinatorial part of the complexity depends on the dimension of this variety rather than on the dimension of the ambient space. This is useful when one considers semi-algebraic sets which have low real dimension embedded in a higher dimensional space.
-
TR1996-729
1996
PLinda User Manual
Brown, T.;
Jeong, K.; Li, B.; Talla, S.; Wyckoff, P.; Shasha, D.
Abstract
|
PDF
Title: PLinda User Manual
Author(s): Brown, T.; Jeong, K.; Li, B.; Talla, S.; Wyckoff, P.; Shasha, D.
Abstract:
Persistent Linda (PLinda) is a programming environment for writing fault-tolerant distributed/parallel programs that may be run on networks of workstations. PLinda is a set of extensions to the Linda parallel programming model; PLinda/C++ and PLinda/Fortran77 are implementations that combine these extensions with the sequential languages C++ and Fortran77, respectively.
The PLinda User Manual introduces the PLinda model, mechanics of the PLinda operations, and programming in PLinda/C++ and PLinda/Fortran77.
-
TR1996-717
1996
Schwarz Preconditioners for Spectral and Mortar Finite Element Methods with Applications to Incompressible Fluids
Casarin, M.A., Jr.
Abstract
|
PDF
Title: Schwarz Preconditioners for Spectral and Mortar Finite Element Methods with Applications to Incompressible Fluids
Author(s): Casarin, M.A., Jr.
Abstract:
The spectral element method has been used extensively for the simulation of fluid flows. The resulting linear systems are often not amenable to direct methods of solution, and are especially ill-conditioned. Domain decomposition preconditioners, well adapted to the solution on parallel computers, are proposed and analyzed; both two and three space dimensions are considered.
Second-order elliptic equations are considered first, and the now well-developed theory of domain decomposition methods for finite elements is fully extended to spectral elements. This includes an analysis of exotic coarse spaces, which have proven necessary for the efficient solution of elliptic problems with large discontinuities in the coefficients, as well as a study of overlapping methods. Estimates of the condition numbers of the Schur complement restricted to an edge (in two dimensions) or to a face (in three dimensions) are also given; in particular, a fast method is designed and studied in full detail for problems with many subregions.
The Stokes problem, when restricted to the space of discrete divergence free velocities, is symmetric positive definite. A number of preconditioners are proposed, which are based on previous results for the scalar elliptic case, and new global models. The construction of a basis for the constrained velocity space is not required, and the resulting condition numbers grow only weakly with the degree $N$ and are independent of the number of subdomains.
We also consider the stationary Navier-Stokes equations, solved with Newton's method. In each iteration, a non-symmetric indefinite problem is solved using a Schwarz preconditioner. A new coarse space is proposed which satisfies the usual properties required by the elliptic theory, and also a specific $H^1$-approximation property. The rate of convergence of the algorithm grows only weakly with $N$, and does not depend on the number of subdomains, or the Newton step.
Finally, a hierarchical basis preconditioner for the mortar finite element method in two dimensions is proposed and analyzed. It is further shown that the analysis of the symmetric positive definite preconditioner can also be applied to construct preconditioners for symmetric indefinite problems arising from second-order elliptic equations. Numerical results are presented for the Helmholtz equation.
-
TR1996-725
1996
Building a Fast Double-Dummy Bridge Solver
Chang, M-S.
Abstract
|
PDF
Title: Building a Fast Double-Dummy Bridge Solver
Author(s): Chang, M-S.
Abstract:
Compared to other games, particularly chess, the research in computer bridge is immature, and the best bridge-playing programs are mediocre. In this paper we address the problem of designing a fast double-dummy bridge game (i.e., a simplified bridge game with perfect information) solver. Although the size of the game tree generated when searching for the best line of play is huge (on the order of $13! \cdot 2^{39} \approx 10^{21}$, even if we assume the average branching factor for players to follow suit is just 2), we show that, through a variety of search techniques and some efficient move-ordering and pruning heuristics, most double-dummy bridge hands can be solved within a reasonable amount of time. In this paper we first give a brief introduction to computer bridge and previous work on the card-playing phase of bridge. Next, we describe the top-level architecture of our double-dummy solver (dds), followed by a number of implementation techniques employed in our dds. Finally we present experimental results, draw our conclusions, and describe some future work toward automating card-playing in real bridge.
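The core pruning idea behind such a perfect-information search is alpha-beta; the sketch below runs it on an explicit game tree of integer payoffs (a generic illustration only, without the bridge-specific move ordering or transposition tables a real double-dummy solver needs):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Plain alpha-beta on an explicit game tree given as nested
    lists with integer leaves. Illustrative sketch of the pruning
    idea, not a bridge engine."""
    if isinstance(node, int):
        return node                     # leaf: payoff
    if maximizing:
        v = -float("inf")
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                   # cutoff: opponent avoids this line
        return v
    v = float("inf")
    for child in node:
        v = min(v, alphabeta(child, alpha, beta, True))
        beta = min(beta, v)
        if alpha >= beta:
            break                       # cutoff
    return v
```

On the tree `[[3, 5], [2, 9]]` the second subtree is cut off after its first leaf, since its minimizing value can no longer beat the first subtree's 3.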
-
TR1996-730
1996
An O(n log n) Algorithm for the Maximum Agreement Subtree Problem for Binary Trees
Cole, R.;
Farach, M.; Hariharan, R.; Przytycka, T.; Thorup, M.
Abstract
|
PDF
Title: An O(n log n) Algorithm for the Maximum Agreement Subtree Problem for Binary Trees
Author(s): Cole, R.; Farach, M.; Hariharan, R.; Przytycka, T.; Thorup, M.
Abstract:
The Maximum Agreement Subtree problem is the following:
Given two trees whose leaves are drawn from the same set of items (e.g., species), find the largest subset of these items so that the portions of the two trees restricted to these items are isomorphic. We consider the case which occurs frequently in practice, i.e., the case when the trees are binary, and give an O(n log n) time algorithm for this problem.
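For contrast with the O(n log n) result, the problem has a direct recursive solution; the naive memoized recursion below (roughly quadratic or worse) only makes the problem statement concrete, and is not the paper's algorithm:

```python
from functools import lru_cache

def mast(t1, t2):
    """Size of the maximum agreement subtree of two rooted binary
    trees. Leaves are strings; internal nodes are (left, right)
    pairs. Naive recursion with memoization, for illustration."""
    def leaves(t):
        return {t} if isinstance(t, str) else leaves(t[0]) | leaves(t[1])

    @lru_cache(maxsize=None)
    def go(a, b):
        if isinstance(a, str):                  # single leaf vs. tree
            return 1 if a in leaves(b) else 0
        if isinstance(b, str):
            return 1 if b in leaves(a) else 0
        (a1, a2), (b1, b2) = a, b
        return max(go(a1, b1) + go(a2, b2),     # match subtrees directly
                   go(a1, b2) + go(a2, b1),     # or crossed
                   go(a, b1), go(a, b2),        # or descend on one side
                   go(a1, b), go(a2, b))
    return go(t1, t2)
```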
-
TR1996-731
1996
Tree Pattern Matching and Subset Matching in Randomized O(n log^3 m) Time
Cole, R.;
Hariharan, R.
Abstract
|
PDF
Title: Tree Pattern Matching and Subset Matching in Randomized O(n log^3 m) Time
Author(s): Cole, R.; Hariharan, R.
Abstract:
The main goal of this paper is to give an efficient algorithm for the Tree Pattern Matching problem. We also introduce and give an efficient algorithm for the Subset Matching problem.
The Subset Matching problem is to find all occurrences of a pattern string p of length m in a text string t of length n, where each pattern and text location is a set of characters drawn from some alphabet. The pattern is said to occur at text position i if the set p[j] is a subset of the set t[i+j-1], for all j, 1 <= j <= m. We give an O((s+n)\log^3 m) randomized algorithm for this problem, where s denotes the sum of the sizes of all the sets.
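The definition can be checked directly by brute force; the sketch below is only a specification of the problem (O(nms) in the worst case), not the randomized O((s+n) log^3 m) algorithm:

```python
def subset_match_positions(pattern, text):
    """Brute-force checker for the Subset Matching definition:
    the pattern occurs at (1-based) text position i if
    p[j] is a subset of t[i+j-1] for all 1 <= j <= m.
    Pattern and text are lists of Python sets."""
    m, n = len(pattern), len(text)
    hits = []
    for i in range(n - m + 1):
        # set <= set is Python's subset test
        if all(pattern[j] <= text[i + j] for j in range(m)):
            hits.append(i + 1)      # report 1-based positions
    return hits
```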
Then we reduce the Tree Pattern Matching problem to a number of instances of the Subset Matching problem. This reduction takes linear time and the sum of the sizes of the Subset Matching problems obtained is also linear. Coupled with our first result, this implies an O(n log^3 m) time randomized algorithm for the Tree Pattern Matching problem.
-
TR1996-724
1996
Two Heuristics for the Steiner Tree Problem
Dreyer, D. R.;
Overton, M.L.
Abstract
|
PDF
Title: Two Heuristics for the Steiner Tree Problem
Author(s): Dreyer, D. R.; Overton, M.L.
Abstract:
The Steiner tree problem is to find the tree with minimal Euclidean length spanning a set of fixed points in the plane, given the ability to add points (Steiner points). The problem is NP-hard, so polynomial-time heuristics are desired. We present two such heuristics, both of which utilize an efficient method for optimizing a tree with a given topology. The first systematically inserts Steiner points between edges of the minimal spanning tree meeting at angles less than 120 degrees, performing a local optimization at the end. The second begins by finding the Steiner tree for three of the fixed points. Then, at each iteration, it introduces a new fixed point to the tree, connecting it to each possible edge by inserting a Steiner point, and minimizes over all connections, performing a local optimization for each. We present a variety of test cases that demonstrate the strengths and weaknesses of both algorithms.
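The building block of the second heuristic --- the Steiner tree of three fixed points --- reduces to locating their Fermat point; when all angles are below 120 degrees this coincides with the geometric median, which Weiszfeld iteration approximates. The sketch below is illustrative only and is not the optimization method of the paper:

```python
import math

def fermat_point(pts, iters=200):
    """Weiszfeld iteration for the geometric median of the given
    points; for three points with all angles < 120 degrees this is
    the single Steiner point of their Steiner tree."""
    x = sum(p[0] for p in pts) / len(pts)   # start at the centroid
    y = sum(p[1] for p in pts) / len(pts)
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in pts:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:                   # iterate landed on a terminal
                return px, py
            w = 1.0 / d                     # inverse-distance weight
            wsum += w
            wx += w * px
            wy += w * py
        x, y = wx / wsum, wy / wsum
    return x, y

def tree_length(pts, steiner):
    """Length of the star joining every fixed point to the Steiner point."""
    return sum(math.hypot(px - steiner[0], py - steiner[1]) for px, py in pts)
```

For an equilateral triangle the Fermat point is the centroid and the star has length sqrt(3) times the circumradius factor, which makes a convenient sanity check.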
-
Ph.D. Thesis
1996
Statistical Source Channel Models for Natural Language Understanding
Epstein, Mark
Abstract
|
PDF
Title: Statistical Source Channel Models for Natural Language Understanding
Candidate: Epstein, Mark
Advisor(s): Grishman, Ralph
Abstract:
The problem of Natural Language Understanding (NLU) has intrigued researchers since the 1960s. Most researchers working in computational linguistics focus on linguistic solutions to their problems. They develop grammars and parsers to process the input natural language into a meaning representation. In this thesis, a new approach is utilized. Borrowing from the field of communication theory, an information theoretic approach to natural language understanding is applied. This is based on the source-channel model of communication.
The source-channel model of NLU assumes that the user has a meaning in the domain of the application that he wishes to convey. This meaning is sent through a noisy channel. The observer receives the English sentence as output from the noisy channel. The observer then submits the English sentence to a decoder, which determines the meaning most likely to have generated the English. The decoder uses mathematical models of the channel and the meanings to process the English sentence. Thus, the following problems must be addressed in a source-channel model for NLU:
- 1.
- A mathematical model of the noisy-channel must be developed.
- 2.
- The parameters of the model must be set, either manually or by an automatic training procedure.
- 3.
- A decoder must be built to search through the meaning space for the most likely meaning to have generated the observed English.
This dissertation focuses on the first two of these problems. Several mathematical models of the noisy channel are developed. They are trained from a corpus of context independent sentence pairs consisting of both English and the corresponding meaning. The parameters of the models are trained to maximize the likelihood of the model's prediction of the observed training data using Dempster and Laird's Expectation-Maximization algorithm. Results are presented for the Air Travel Information Service (ATIS) domain.
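The decoding step that the source-channel model requires can be written in one line once channel and prior models exist; in this toy sketch both are small hand-made probability tables with hypothetical values, not the trained ATIS models:

```python
def decode(english, meanings, channel, prior):
    """Source-channel decoding sketch: pick the meaning M maximizing
    P(M) * P(E | M). `channel[(E, M)]` and `prior[M]` are toy
    probability tables supplied by the caller."""
    return max(meanings,
               key=lambda m: prior[m] * channel.get((english, m), 0.0))
```

A usage example with two hypothetical meanings: with prior {LIST_FLIGHTS: 0.6, LIST_FARES: 0.4} and channel probabilities 0.5 and 0.1 for the sentence "show flights", the decoder picks LIST_FLIGHTS, since 0.6 * 0.5 dominates 0.4 * 0.1.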
-
TR1996-720
1996
A Model and Solution to the DNA Flipping String Problem
Geiger, D.;
Parida, L.
Abstract
|
PDF
Title: A Model and Solution to the DNA Flipping String Problem
Author(s): Geiger, D.; Parida, L.
Abstract:
We consider the case where a pool of DNA molecule clones, both flipped and not flipped, has been cut by restriction enzymes. Ideally, each clone is cut in the same positions, although in practice, due to errors, this does not always happen. The computational problem is to determine where the cuts have occurred.
This is a key problem in determining the structure of the original DNA molecule.
A single molecule is represented by a string of 1's and 0's, with cuts represented by 1's. A set of molecule clones (with errors) is observed, but the orientation/parity of each molecule is unknown. Clearly, the locations of the observed cuts of a molecule depend on its parity: flipping the molecule results in the observed cut locations being ``flipped'' as well.
We propose a Bayesian approach to generate a posterior distribution on the cuts and parity, given the data. We first present an approximate algorithm that attempts to divide the problem into subproblems, but is not guaranteed to solve the problem. Then, we propose another approximate method based on a statistical framework and a mean field annealing algorithm. It computes the maximum posterior marginal (MPM estimator) and maximum a posteriori estimate (MAP estimator).
We also provide evidence that the exact solution of the problem is intractable.
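The Bayesian setup can be illustrated on a single molecule under an assumed independent-Bernoulli cut model; the per-position cut probabilities below are hypothetical, and the sketch computes only the parity posterior for one molecule, not the mean field annealing estimators of the paper:

```python
def flip_posterior(obs, theta, p_flip=0.5):
    """Posterior probability that an observed cut string was flipped,
    under a toy model: position i is a cut with probability theta[i]
    in the original orientation and theta[::-1][i] if flipped, with
    independent positions. obs is a list of 0/1 values."""
    def lik(th):
        p = 1.0
        for o, t in zip(obs, th):
            p *= t if o == 1 else (1.0 - t)
        return p
    a = p_flip * lik(theta[::-1])        # evidence for "flipped"
    b = (1.0 - p_flip) * lik(theta)      # evidence for "not flipped"
    return a / (a + b)
```

With cut profile [0.9, 0.1, 0.1], an observation [1, 0, 0] strongly favors the original orientation while [0, 0, 1] strongly favors the flipped one, as expected.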
-
TR1996-727
1996
Hierarchically Split Cube Forests for Decision Support: description and tuned design
Johnson, T.;
Shasha, D.
Abstract
|
PDF
Title: Hierarchically Split Cube Forests for Decision Support: description and tuned design
Author(s): Johnson, T.; Shasha, D.
Abstract:
The paradigmatic view of data in decision support consists of a set of dimensions (e.g., location, product, time period, ...), each encoding a hierarchy (e.g., location has hemisphere, country, state/province, ..., block). Typical queries consist of aggregates over a quantifiable attribute (e.g., sales) as a function of at most one attribute in each dimension of this ``data cube.'' For example, find the sum of all sales of blue polo shirts in Palm Beach during the last quarter. In this paper, we introduce an index structure for storing and indexing aggregates, called ``cube forests,'' to support such cube queries efficiently --- one index search is usually enough.
In their most general form, cube forests require a lot of space. So, we present an optimized structure, called ``hierarchically split cube forests,'' that exploits the hierarchical nature of the data to save space. We then present a model and algorithms to arrive at designs that further reduce update time, but suffer an increase in query time. Our experiments bear out the model and show that the structure has promise for decision support applications in read-intensive environments.
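The aggregates being indexed are the standard data-cube group-bys; the sketch below materializes the sum of a measure for every subset of dimensions in a toy table. This is full materialization for illustration only --- cube forests exist precisely to index such aggregates without storing all of them explicitly:

```python
from itertools import combinations
from collections import defaultdict

def cube_aggregates(rows, dims, measure):
    """Materialize the data cube of sums: one entry per subset of
    dimensions and per combination of values in that subset.
    rows are dicts; dims is a tuple of dimension column names."""
    cube = defaultdict(float)
    for row in rows:
        for r in range(len(dims) + 1):
            for subset in combinations(dims, r):
                key = (subset, tuple(row[d] for d in subset))
                cube[key] += row[measure]
    return cube
```

A cube query such as "total sales in FL" is then a single dictionary lookup under the key `(("loc",), ("FL",))`.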
-
TR1996-716
1996
Preconditioners for Indefinite Problems
Klawonn, A.
Abstract
|
PDF
Title: Preconditioners for Indefinite Problems
Author(s): Klawonn, A.
Abstract:
Two different preconditioners for symmetric saddle point problems with a penalty term are analyzed. The saddle point problems are discretized by mixed finite elements. The preconditioners are applied in combination with Krylov space methods. It is shown that both methods yield convergence rates that are independent of both the discretization and the penalty parameters. The first method is based on a symmetric positive definite block-diagonal preconditioner and the second one uses a non-symmetric and indefinite block-triangular preconditioner. Numerical results are presented for a problem of linear elasticity. The preconditioners in our experiments are based on domain decomposition and multilevel techniques. It is further shown that the analysis of the symmetric positive definite preconditioner can also be applied to construct preconditioners for symmetric indefinite problems arising from second-order elliptic equations. Numerical results are presented for the Helmholtz equation.
-
TR1996-715
1996
Skip-Over: Algorithms and Complexity for Overloaded Systems that Allow Skips
Koren, G.;
Shasha, D.
Abstract
|
PDF
Title: Skip-Over: Algorithms and Complexity for Overloaded Systems that Allow Skips
Author(s): Koren, G.; Shasha, D.
Abstract:
In applications ranging from video reception to telecommunications and packet communication to aircraft control, tasks enter periodically and have fixed response time constraints, but missing a deadline is acceptable, provided most deadlines are met. We call such tasks ``occasionally skippable''. We look at the problem of uniprocessor scheduling of occasionally skippable periodic tasks in an environment having periodic tasks. We show that making optimal use of skips is NP-hard. We then look at two algorithms called Skip-Over Algorithms (one a variant of earliest deadline first and one of rate monotonic scheduling) that exploit skips. We give schedulability bounds for both.
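One quick consequence of allowing skips can be sketched as an effective-utilization test: if at most one of every s_i consecutive instances of task i may be skipped, at least a (s_i - 1)/s_i fraction of its load must still be scheduled. Under our reading of the skip-over model, an effective utilization above 1 rules out feasibility; this is a necessary condition only, not a sufficiency test, and the sketch is an assumption-laden illustration rather than the paper's algorithms:

```python
def skip_utilization(tasks):
    """Effective utilization of periodic tasks with skips. Each task
    is a tuple (C, T, s): compute time C, period T, skip parameter s,
    meaning at most one of every s consecutive instances may be
    skipped. Illustrative sketch of a necessary feasibility test."""
    return sum(c * (s - 1) / (t * s) for c, t, s in tasks)

def passes_necessary_test(tasks):
    """True if the workload is not ruled out by effective utilization."""
    return skip_utilization(tasks) <= 1.0
```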
-
TR1996-723
1996
Highly Efficient Instruction Scheduling of Realtime Programs on RISC Processors
Leung, A.;
Palem, K.V.; Pnueli, A.
Abstract
|
PDF
Title: Highly Efficient Instruction Scheduling of Realtime Programs on RISC Processors
Author(s): Leung, A.; Palem, K.V.; Pnueli, A.
Abstract:
Enabled by RISC technologies, low-cost commodity microprocessors are performing at ever increasing levels, significantly via instruction level parallelism (ILP). This in turn increases the opportunities for their use in a variety of day-to-day applications ranging from the simple control of appliances such as microwave ovens, to sophisticated systems for cabin control in modern aircraft. Indeed, ``embedded'' applications such as these represent segments in the computer industry with great potential for growth. However, this growth is currently impeded by the lack of robust optimizing compiler technologies that support the assured, rapid and inexpensive prototyping of real-time software in the context of microprocessors with ILP. In this paper, we will present fast (polynomial-time) algorithms for compile-time instruction scheduling, of programs instrumented with timing-constraints, on processors with ILP. Our algorithms can be distinguished from earlier work in that they are guaranteed to find feasible schedules --- those satisfying the timing-constraints --- whenever such schedules exist, in cases of practical interest. Consequently, they can serve as powerful engines and can simultaneously support the ``analysis'' of the program prior to compilation, as well as during compilation once a feasible schedule is identified via analysis. We will also describe a novel notation, Time_tract, for specifying timing-constraints in programs, independent of the base language being used to develop the embedded application; Time_tract specifications are language independent and can be instrumented into imperative and object-oriented languages non-intrusively. As we will show, the instruction scheduling questions that arise out of Time_tract specifications are always ``tractable''. In contrast, a range of specification mechanisms proposed earlier yield substantially intractable instruction scheduling questions, thereby limiting their potential utility. 
We will sketch a formal and precise comparison of the tractability and related expressive power issues between Time_tract and some of the extant mechanisms for specifying properties of timed programs; this will be done using the canonical framework of timed-automata.
-
TR1996-718
1996
CoRRet: A CONSTRAINT Based Environment for Rapid Prototyping Real Time Programs
Palem, K.
Abstract
|
PDF
Title: CoRRet: A CONSTRAINT Based Environment for Rapid Prototyping Real Time Programs
Author(s): Palem, K.
Abstract:
The information revolution that we are in the midst of has led to the use of computers controlling applications ranging from automobiles and games, to video-pumps in the information highway. These applications are distinguished by the fact that they use programs with special timing relationships between their constituent elements. For example, a program running in the microprocessor controlling an ABS system in a modern automobile must sense and react to the friction coefficient between the brake pads and the wheel at well-defined intervals of time; failure to do so will result in a systemic failure of the brakes. Referred to typically as embedded systems, these applications constitute a significant portion of the potential growth in the computer industry. However, this growth opportunity is being hampered by a lack of adequate support via software development tools, to aid the easy, rapid and correct prototyping of embedded applications.
In this report, we outline CoRReT, a COnstraint based environment for the Rapid prototyping of REal Time programs. The report outlines the overall system architecture as well as the key modules in this environment that are being currently developed. CoRReT is a scheduling-centric system in that a suite of algorithms for instruction scheduling programs instrumented with real-time constraints are at its core. These algorithms are an integral part of an (optimizing) compiler which will compile these programs automatically while attempting to ensure that the timing constraints are met; when the constraints are met, the resulting schedule for the instructions is said to be feasible. If a feasible schedule is found, it will be fed automatically into a code-generator in the back-end of the compiler. Our envisioned scheduler can --- in addition to traditional control- and data-dependence constraints in the source program --- also cope with a variety of timing constraints specified by the programmer.
Our focus is on computational platforms that embody parallelism at two levels of granularity. At the highest level, we envision a tightly-coupled parallel machine offering large-scale parallelism. In this setting, a single embedded application can be distributed across the individual processors of the cluster. Furthermore, each processor in this parallel machine can embody Instruction Level Parallelism (ILP) at a fine-grained level.
Unfortunately, due to a lack of automatic tools and technology that can provide compilation support for real-time constraints ubiquitous to embedded applications, parallel computing platforms have not proliferated in this setting. Considering the fine-grained case first, RISC processors with ILP have not yet found a niche in this domain; currently, developers of embedded systems are reluctant to embrace ILP technologies due to the onerous task of ensuring timing relationships in the program by hand --- a difficulty compounded by parallelism (at a fine-grained level) in the processor. Clearly, providing support through automation that frees the programmer of these difficulties, is a means of overcoming this challenge.
Our response to this challenge via CoRReT is to develop scheduling methodologies and tools for automatically harnessing very high performance from these platforms, in the context of embedded systems. In the absence of time-constraints, major progress has been achieved in this direction at the coarse-grained level. The situation is even better at the fine-grained level where scheduling technology is being used routinely in product-quality compilers for RISC processors.
The methodology on which CoRReT is based is independent of any particular target processor, and is applicable to third and fourth generation languages. Furthermore, we propose to use the same scheduling engines during the static analysis of the program as well as during compilation. We anticipate this ``confluence'' in the scheduling algorithms to aid in shorter prototyping cycles, since identical schedules will be used by the analysis tools and the back-end of the compiler to generate code. We envision that the algorithms and tools that go into CoRReT will naturally form an integral part of a full-fledged programming environment for prototyping real-time programs on parallel platforms.
-
Ph.D. Thesis
1996
Solving the Navier-Stokes Equations on a Distributed Parallel Computer
Sabbagh, Hadil
Abstract
|
PDF
Title: Solving the Navier-Stokes Equations on a Distributed Parallel Computer
Candidate: Sabbagh, Hadil
Advisor(s): Peskin, Charles S.
Abstract:
Speed and space are two major issues in computational fluid dynamics today. Scalable parallel or distributed computers offer the promise of faster time to solve problems through parallelism and solutions to larger problems by adding more parallel processors with their own private memories. These systems use message passing to share data between processors. Parallel programs are difficult to write, especially for message passing systems, and there are few well-studied test cases.
In this dissertation, we solve the incompressible Navier-Stokes equations on a periodic cubic domain (3-torus). The numerical method is a finite difference method that consists of two parts: upwind differencing applied to the non-linear terms and solution of the Stokes equations. The latter are solved implicitly using a three-dimensional FFT. For the parallel implementation, the domain is divided into equally sized non-periodic cubic subdomains. Each subdomain is assigned to a processor; the processors form a process grid which is periodic. The parallel upwind differencing is preceded by an exchange of face data. The discrete Fourier transform in the Stokes solver is computed by applying one-dimensional FFTs sequentially in the three coordinate directions. In each coordinate direction, data must be exchanged only among those processors that lie on the same line of the process grid.
The parallel algorithm was implemented twice: once using PVM and once using MPI. Although both implementations are described in the thesis, the performance of only the MPI version is discussed.
The Navier-Stokes solver is tested on the IBM SP-2. Three constant problem size and three scalability experiments are used to analyze the performance of the solver. The fluid solver achieves a speedup of 48.8 when solving a 240 * 240 * 240 problem on 216 processors. Furthermore, there is evidence of scalability.
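The Stokes-solver strategy described above, computing the 3-D discrete Fourier transform by applying 1-D FFTs sequentially along each coordinate direction, can be illustrated in serial form (a sketch of ours using NumPy; the thesis's actual solver distributes each pass across a periodic process grid with face exchanges):

```python
import numpy as np

def fft3_by_axes(u):
    """Compute a 3-D DFT by applying 1-D FFTs along each coordinate
    direction in turn. In the parallel version, data is exchanged
    before each pass only among processors on the same line of the
    process grid, so every 1-D transform works on local data."""
    v = np.fft.fft(u, axis=0)   # transform along x-lines
    v = np.fft.fft(v, axis=1)   # then along y-lines
    v = np.fft.fft(v, axis=2)   # then along z-lines
    return v

rng = np.random.default_rng(0)
u = rng.standard_normal((8, 8, 8))
```

The three sequential passes agree with a direct 3-D transform, which is what makes the per-direction data exchange pattern correct.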
-
TR1996-719
1996
NYU Reactive Gripper: An Implementation
Teichmann, M.;
Mishra, B.
Abstract
|
PDF
Title: NYU Reactive Gripper: An Implementation
Author(s): Teichmann, M.; Mishra, B.
Abstract:
We consider the problem of grasping an unknown polygonal flat object using a parallel jaw gripper. Our design equips a standard gripper with several light-beam sensors (close to each jaw) and employs a control scheme based on a reactive grasping algorithm. This is done by probing the object to locate a good grasp position, and then grasping, without moving the object significantly. The goal is to do as little motion as possible to find a grasp. In this paper, we discuss an implementation of this device using NYU's MOSAIC robot, following a quick overview of the underlying reactivity principle.
-
TR1996-726
1996
Some Numerical Results Using An Additive Schwarz Method For Maxwell's Equations
Toselli, A.
Abstract
|
PDF
Title: Some Numerical Results Using An Additive Schwarz Method For Maxwell's Equations
Author(s): Toselli, A.
Abstract:
We present some numerical results for a two-level additive overlapping Schwarz method applied to the 2-D Maxwell's equations. Nedelec finite elements defined on rectangles are employed. Numerical examples show the convergence properties of the method, when varying the mesh size of the coarse and fine problems, the overlap and the time step of the implicit finite difference scheme employed.
-
TR1996-722
1996
A Note on Scheduling Algorithms for Processors with Lookahead
Ungureanu, C.
Abstract
|
PDF
Title: A Note on Scheduling Algorithms for Processors with Lookahead
Author(s): Ungureanu, C.
Abstract:
Many superscalar processors designed today are able to dynamically schedule instructions. Dynamic scheduling means that a processor is able to analyze a portion of the instruction stream ``on the fly'', and has the capability of issuing an instruction other than the next one available in the input, in order to avoid stalling. Such an instruction is said to be executed out of order.
Scheduling algorithms for machines with in-order execution are used in most compilers today. However, schedules that are optimal for machines with in-order execution may be sub-optimal for a machine with out-of-order execution. Here, we present an algorithm that produces a local schedule for a trace of basic blocks such that the completion time is minimized for a processor with pipeline depth k=2 and dynamic scheduling ability with scope size s=2. The algorithm runs in polynomial time. A generalization of the algorithm to machines with larger scopes is straightforward.
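The interaction between pipeline depth and scope can be seen in a toy simulator (our own simplified model, not the paper's algorithm): a single-issue machine with pipeline depth k=2, where a result issued in cycle t is usable from cycle t+2, and a dynamic-scheduling window of `scope` instructions.

```python
def simulate(order, deps, scope):
    """Cycle-accurate sketch: each cycle, scan the next `scope`
    unissued instructions in program order and issue the first one
    whose operands are ready (pipeline depth 2: a result issued at
    cycle t is available from cycle t + 2)."""
    issue = {}
    remaining = list(order)
    cycle = 0
    while remaining:
        for instr in remaining[:scope]:
            if all(d in issue and cycle >= issue[d] + 2
                   for d in deps.get(instr, ())):
                issue[instr] = cycle
                remaining.remove(instr)
                break                      # single-issue: one per cycle
        cycle += 1
    return max(issue.values()) + 2         # last issue + pipeline drain

# B depends on A; C is independent.
deps = {"B": {"A"}}
in_order = simulate(["A", "B", "C"], deps, scope=1)   # stalls on B
scope_two = simulate(["A", "B", "C"], deps, scope=2)  # hoists C past B
```

With scope 1 the machine stalls a cycle waiting for A's result; with scope 2 it fills that cycle with C, finishing one cycle earlier.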
-
TR1996-728
1996
Formal Models of Distributed Memory Management
Ungureanu, C.;
Goldberg, B.
Abstract
|
PDF
Title: Formal Models of Distributed Memory Management
Author(s): Ungureanu, C.; Goldberg, B.
Abstract:
We develop an abstract model of memory management in distributed systems. The model is low-level enough so we can express communication, allocation and garbage collection, but otherwise hide many of the lower-level details of an actual implementation.
Recently, such formal models have been developed for memory management in a functional, sequential setting by Morrisett, Felleisen, and Harper. The models are rewriting systems whose terms are programs. Programs have both the "code" (control string) and the "store" syntactically apparent. Evaluation is expressed as conditional rewriting and includes store operations. Garbage collection becomes a rewriting relation that removes part of the store without affecting the behavior of the program.
By using techniques developed for communicating and concurrent systems such as Milner's CCS, we extend the model for a distributed environment. Sending and receiving messages is also made apparent at the syntactic level. A very general garbage collection rule based on reachability is introduced and proved correct. Now, proving correct a specific collection strategy is reduced to showing that the relation between programs defined by the strategy is a subrelation of the general relation. Any actual implementation which is capable of providing the transitions (including their atomicity constraints) specified by the strategy is therefore correct.
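The general reachability-based collection rule can be illustrated on a toy store (a sketch of ours in Python; the report's model is a rewriting system over program terms, with the store syntactically apparent, not dictionaries):

```python
def collect(store, roots):
    """Remove from `store` every binding unreachable from `roots`.
    `store` maps locations to the lists of locations they reference;
    dropping only unreachable bindings cannot affect the behavior of
    the program, which is the correctness condition for the rule."""
    reachable, frontier = set(), list(roots)
    while frontier:
        loc = frontier.pop()
        if loc not in reachable and loc in store:
            reachable.add(loc)
            frontier.extend(store[loc])
    return {loc: refs for loc, refs in store.items() if loc in reachable}

store = {"a": ["b"], "b": [], "c": ["c"]}   # "c" is an unreachable cycle
collected = collect(store, ["a"])
```

A specific collection strategy is then correct when every store it produces could also have been produced by this general rule.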
-
TR1995-681
1995
Complementarity and Nondegeneracy in Semidefinite Programming
Alizadeh, F.;
Haeberly, J.; Overton, M.
Abstract
|
PDF
Title: Complementarity and Nondegeneracy in Semidefinite Programming
Author(s): Alizadeh, F.; Haeberly, J.; Overton, M.
Abstract:
Primal and dual nondegeneracy conditions are defined for semidefinite programming. Given the existence of primal and dual solutions, it is shown that primal nondegeneracy implies a unique dual solution and that dual nondegeneracy implies a unique primal solution. The converses hold if strict complementarity is assumed. Primal and dual nondegeneracy assumptions do not imply strict complementarity, as they do in LP. The primal and dual nondegeneracy assumptions imply a range of possible ranks for primal and dual solutions $X$ and $Z$. This is in contrast with LP where nondegeneracy assumptions exactly determine the number of variables which are zero. It is shown that primal and dual nondegeneracy and strict complementarity all hold generically. Numerical experiments suggest probability distributions for the ranks of $X$ and $Z$ which are consistent with the nondegeneracy conditions.
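For orientation, the complementarity notions involved can be stated in standard SDP form (a generic summary from the semidefinite programming literature, with $n$ the matrix order; not text from this report):

```latex
% A primal-dual optimal pair (X, Z) of a semidefinite program satisfies
XZ = 0, \qquad X \succeq 0, \qquad Z \succeq 0,
\quad\text{hence}\quad
\operatorname{rank} X + \operatorname{rank} Z \le n.
% Strict complementarity strengthens the rank inequality to
\operatorname{rank} X + \operatorname{rank} Z = n,
% the analogue of the LP condition x_i z_i = 0 with x_i + z_i > 0 for all i.
```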
-
TR1995-682
1995
Computing Limit Loads by Minimizing a Sum of Norms
Andersen, K.;
Christiansen, E.; Overton, M.
Abstract
|
PDF
Title: Computing Limit Loads by Minimizing a Sum of Norms
Author(s): Andersen, K.; Christiansen, E.; Overton, M.
Abstract:
We consider the problem of computing the collapse state in limit analysis for a solid with a quadratic yield condition, such as, for example, the Mises condition. After discretization with the finite element method, using divergence-free elements for the plastic flow, the kinematic formulation turns into the problem of minimizing a sum of Euclidean vector norms, subject to a single linear constraint. This is a nonsmooth minimization problem, since many of the norms in the sum may vanish at the optimal point. However, efficient solution algorithms for this particular convex optimization problem have recently been developed.
The method is applied to test problems in limit analysis in two different plane models: plane strain and plates. In the first case more than 80 percent of the terms in the sum are zero in the optimal solution, causing severe ill-conditioning. In the last case all terms are nonzero. In both cases the algorithm works very well, and we solve problems which are larger by at least an order of magnitude than previously reported. The relative accuracy for the discrete problems, measured by duality gap and feasibility, is typically of the order 1.0E-8. The discretization error, due to the finite grid, depends on the nature of the solution. In the applications reported here it ranges from 1.0E-5 to 1.0E-2.
Keywords: Limit analysis, plasticity, finite element method, nonsmooth optimization.
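The kinematic formulation above, minimizing a sum of Euclidean norms subject to a single linear constraint, can be written down and attacked naively with a projected subgradient method (a sketch on random data; the instance, names, and method are ours, not the efficient specialized algorithms the paper refers to):

```python
import numpy as np

def sum_norms(A_list, b_list, x):
    """Objective: sum of Euclidean norms ||A_i x - b_i||."""
    return sum(np.linalg.norm(A @ x - b) for A, b in zip(A_list, b_list))

def min_sum_norms(A_list, b_list, c, steps=2000):
    """Projected-subgradient sketch for
        minimize  sum_i ||A_i x - b_i||   subject to  c.x = 1.
    Many norms may vanish at the optimum, so the objective is
    nonsmooth; we use a subgradient and keep the best iterate."""
    n = c.size
    x = c / (c @ c)                              # feasible start: c.x = 1
    P = np.eye(n) - np.outer(c, c) / (c @ c)     # projector onto c.x = 0
    best, best_f = x, sum_norms(A_list, b_list, x)
    for t in range(1, steps + 1):
        g = np.zeros(n)
        for A, b in zip(A_list, b_list):
            r = A @ x - b
            nr = np.linalg.norm(r)
            if nr > 1e-12:                       # subgradient of a nonzero term
                g += A.T @ r / nr
        x = x - (1.0 / t) * (P @ g)              # step within the constraint plane
        f = sum_norms(A_list, b_list, x)
        if f < best_f:
            best, best_f = x, f
    return best

rng = np.random.default_rng(1)
A_list = [rng.standard_normal((2, 3)) for _ in range(4)]
b_list = [rng.standard_normal(2) for _ in range(4)]
c = np.array([1.0, 2.0, 2.0])
x = min_sum_norms(A_list, b_list, c)
```

Stepping only in the null space of $c$ keeps every iterate feasible, which is why no separate constraint handling is needed.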
-
TR1995-707
1995
The Supervisor Synthesis Problem for Unrestricted CTL is NP-complete
Antoniotti, M.;
Mishra, B.
Abstract
|
PDF
Title: The Supervisor Synthesis Problem for Unrestricted CTL is NP-complete
Author(s): Antoniotti, M.; Mishra, B.
Abstract:
The problem of restricting a finite state model (a Kripke structure) in order to satisfy a set of unrestricted CTL formulae is named the ``Unrestricted CTL Supervisor Synthesis Problem''. The finite state model has the characteristics described in \cite{ramadge-wonham87}, that is, its transitions are partitioned between "controllable" and "uncontrollable" ones. The set of CTL formulae represents a specification of the "desired behavior" of the system, which may be achieved through a "control action". This note shows the problem to be NP-complete.
-
Ph.D. Thesis
1995
Synthesis and Verification of Controllers for Robotics and Manufacturing Devices with Temporal Logic and the "Control-D" System
Antoniotti, Marco
Abstract
|
PDF
Title: Synthesis and Verification of Controllers for Robotics and Manufacturing Devices with Temporal Logic and the "Control-D" System
Candidate: Antoniotti, Marco
Advisor(s): Mishra, Bud
Abstract:
This dissertation studies the semi-automated synthesis and verification of control systems for robotics and manufacturing devices using formal methods in a discrete framework, and bears some resemblance to the theory of controlled discrete event systems (CDES) of Ramadge and Wonham. The discrete controller components of a walking machine and of a manufacturing line in the Combat Ration Advanced Manufacturing Technology Demonstration (CRAMTD) of Rutgers University are constructed automatically using the algorithms developed here.
The goal of this research has been to facilitate the integration of CDES theory with the specification and verification formalisms for finite state systems. Many of our techniques rely on the application of some flavor of temporal logic. In particular, the model-checking techniques of Clarke and Emerson, for branching-time temporal logic, proved to be valuable in the implementation of a controller synthesis tool for CDES, called Control-D. The main synthesis algorithm used by the Control-D tool compares favorably with the Ramadge-Wonham algorithm in time and space complexity, while achieving improved expressiveness in its underlying specification language.
-
TR1995-712
1995
A Hierarchical Preconditioner for the Mortar Finite Element Method
Casarin, M. A.;
Widlund, O. B.
Abstract
|
PDF
Title: A Hierarchical Preconditioner for the Mortar Finite Element Method
Author(s): Casarin, M. A.; Widlund, O. B.
Abstract:
Mortar elements form a family of nonconforming finite element methods that are more flexible than conforming finite elements and are known to be as accurate as their conforming counterparts. A fast iterative method is developed for linear, second-order elliptic equations in the plane. Our algorithm is modeled on a hierarchical basis preconditioner previously analyzed and tested, for the conforming case, by Barry Smith and the second author. A complete analysis and results of numerical experiments are given for lower-order mortar elements and geometrically conforming decompositions of the region into subregions.
-
TR1995-704
1995
Diagonal Edge Preconditioners in p-Version and Spectral Element Methods
Casarin, M. A.
Abstract
|
PDF
Title: Diagonal Edge Preconditioners in p-Version and Spectral Element Methods
Author(s): Casarin, M. A.
Abstract:
-
TR1995-705
1995
Quasi-Optimal Schwarz Methods for the Conforming Spectral Element Discretization
Casarin, M. A.
Abstract
|
PDF
Title: Quasi-Optimal Schwarz Methods for the Conforming Spectral Element Discretization
Author(s): Casarin, M. A.
Abstract:
The spectral element method is used to discretize self-adjoint elliptic equations in three dimensional domains. The domain is decomposed into hexahedral elements, and in each of the elements the discretization space is the set of polynomials of degree $N$ in each variable. A conforming Galerkin formulation is used, the corresponding integrals are computed approximately with Gauss-Lobatto-Legendre (GLL) quadrature rules of order $N$, and a Lagrange interpolation basis associated with the GLL nodes is used. Fast methods are developed for solving the resulting linear system by the preconditioned conjugate gradient method. The conforming {\it finite element} space on the GLL mesh, consisting of piecewise $Q_{1}$ or $P_1$ functions, produces a stiffness matrix $K_h$ that is known to be spectrally equivalent to the spectral element stiffness matrix $K_N$. $K_h$ is replaced by a preconditioner $\tilde{K}_h$ which is well adapted to parallel computer architectures. The preconditioned operator is then $\tilde{K}_h^{-1} K_N$.
Our techniques for non-regular meshes make it possible to estimate the condition number of $\tilde{K}_h^{-1} K_N$, where $\tilde{K}_h$ is a standard finite element preconditioner of $K_h$, based on the GLL mesh. The analyses of two finite element based preconditioners, the wirebasket method of Smith and the overlapping Schwarz algorithm for the spectral element method, are given as examples of the use of these tools. Numerical experiments performed by Pahl are briefly discussed to illustrate the efficiency of these methods in two dimensions.
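The GLL nodes underlying the discretization can be computed directly (a sketch using NumPy's Legendre utilities; the function name and the choice $N=4$ are ours):

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(N):
    """Gauss-Lobatto-Legendre nodes of order N: the endpoints -1, 1
    together with the N - 1 interior roots of P_N'(x), the derivative
    of the degree-N Legendre polynomial. These are the interpolation
    nodes of the Lagrange basis used in the spectral element method."""
    PN = legendre.Legendre.basis(N)
    interior = PN.deriv().roots().real
    return np.sort(np.concatenate(([-1.0], interior, [1.0])))

nodes = gll_nodes(4)   # 5 nodes on [-1, 1], symmetric about 0
```

For $N = 4$ the interior nodes are $0$ and $\pm\sqrt{3/7}$, the roots of $P_4'(x) = (35x^3 - 15x)/2$.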
-
Ph.D. Thesis
1995
Planning in an Imperfect World Using Previous Experiences
Chiu, Jen-Lung
Abstract
|
PDF
Title: Planning in an Imperfect World Using Previous Experiences
Candidate: Chiu, Jen-Lung
Advisor(s): Davis, Ernest
Abstract:
This thesis studies the problem of planning and problem solving in an unpredictable environment by adapting previous experiences. We construct a single agent planning system CADDY and operate it in a simple golf world testbed. The study of CADDY combines the studies of probabilistic, spatial, and temporal reasoning, adapting and reusing plans, and the tradeoff between gains and costs based on various considerations.
The CADDY planning system operates in an uncertain and unpredictable environment. Despite limited perception, incomplete knowledge, and imperfect motion control, CADDY achieves its goal efficiently by finding a plan that is already known to work well in a similar situation and applying repair heuristics to improve it. The capability of adapting experiences makes CADDY a planning system with learning capability.
In this thesis, we discuss the structure of the CADDY planning system and the results of experimental tests of CADDY applied to a simulated golf world. We compare CADDY with several other research projects on probabilistic planners and planners that utilize experiences. We also discuss how CADDY can be characterized in terms of theoretical work on plan feasibility. Finally, we point out possible directions for extending the system and for generalizing the ideas learned from CADDY to other problem domains. Currently, CADDY is not directly applied to real-world problems, but it shows an interesting and promising direction of study. By combining the techniques of probabilistic reasoning, planning, and learning, the performance of planning in real-world domains can be improved dramatically.
-
Ph.D. Thesis
1995
Geodesic Problems in High Dimensions
Choi, Joonsoo
Abstract
|
PDF
Title: Geodesic Problems in High Dimensions
Candidate: Choi, Joonsoo
Advisor(s): Yap, Chee
Abstract:
The geometric shortest path (geodesic) problem can be formulated as follows: given a collection of obstacles in $\R^d$, and source and target points $s, t \in \R^d$, find a shortest obstacle-avoiding path between $s$ and $t$. This thesis studies the Euclidean geodesic problem in $\R^3$ with polyhedral obstacles and the rectilinear geodesic problem in $\R^d$ with pairwise-disjoint, axes-parallel boxes.
Computing Euclidean geodesics in $\R^3$ with polyhedral obstacles is known to be NP-hard. In contrast, Papadimitriou gave a polynomial-time approximation algorithm for this problem. Unfortunately, his complexity analysis involves an unusual mixture of the algebraic computing model and the bit computing model. In the first part of the thesis, we present a true bit complexity analysis: there is an approximation algorithm that computes a geodesic with relative error $\epsilon > 0$ in $O((n^3 M \log M + (nM)^2) \cdot \mu(W))$ time, where $M = O(nL/\epsilon)$, $W = O(\log(n/\epsilon) + L)$, and $\mu(W)$ is the time complexity of multiplying two $W$-bit integers. Our algorithm is a variant of Papadimitriou's algorithm.
The second part of the thesis addresses the rectilinear geodesic problem in $\R^3$ with a set of pairwise-disjoint, axes-parallel boxes. A monotonicity property of rectilinear geodesics is shown: every obstacle-avoiding geodesic between two points is monotone along at least one of the coordinate directions. Using this monotonicity property, an algorithm computing a geodesic from a query point to a fixed point is presented. The preprocessing time of the algorithm is $O(n^2 \log n)$ and each query takes $O(\log n + k)$ time, where $k$ is the number of edges in the geodesic.
The last part of the thesis generalizes the above monotonicity property to all dimensions: given a set of pairwise-disjoint, axes-parallel boxes in $\R^d$, every obstacle-avoiding geodesic between two points is monotone along at least one of the coordinate directions.
-
TR1995-701
1995
On the Dynamic Finger Conjecture for Splay Trees Part II: The Proof
Cole, R.
Abstract
|
PDF
Title: On the Dynamic Finger Conjecture for Splay Trees Part II: The Proof
Author(s): Cole, R.
Abstract:
The following result is shown: On an n-node splay tree, the amortized cost of an access at distance d from the preceding access is O(log (d+1)). In addition, there is an O(n) initialization cost. The accesses include searches, insertions and deletions.
-
TR1995-700
1995
On the Dynamic Finger Conjecture for Splay Trees Part I: Splay Sorting log n-Block Sequences
Cole, R.;
Mishra, B.; Schmidt, J.; Siegel, A.
Abstract
|
PDF
Title: On the Dynamic Finger Conjecture for Splay Trees Part I: Splay Sorting log n-Block Sequences
Author(s): Cole, R.; Mishra, B.; Schmidt, J.; Siegel, A.
Abstract:
A special case of the Dynamic Finger Conjecture is proved; this special case introduces a number of useful techniques.
-
TR1995-711
1995
The Average Case Complexity of Multilevel Syllogistic
Cox, J.;
Ericson, L.; Mishra, B.
Abstract
|
PDF
Title: The Average Case Complexity of Multilevel Syllogistic
Author(s): Cox, J.; Ericson, L.; Mishra, B.
Abstract:
An approach to the problem of developing provably correct programs has been to enrich a theorem prover for Hoare logic with decision procedures for a number of decidable sublanguages of set theory (EMLS, MLS, and extensions) and arithmetic (FPILP) (see [Schwartz, 1977]). Citing results of Goldberg (see [Goldberg, 79]) on the average case behavior of algorithms for SAT, it was hoped that these decision procedures would perform well on average.
So far, it has been fairly difficult to prove average case NP-hardness under the various definitions (see [Levin, 86], [Ben-David et al, 89], [Blass & Gurevich, 91], [Gurevich, 91], [Venkatesan & Rajagopalan, 92], [Schuler & Yamakami, 92] and [Reischuk & Schindelhauer, 93]). We should note that the definitions in the literature have not yet been standardized. We survey some of the results of the average case analysis of NP-complete problems, and compare the results of Goldberg with more pessimistic results. We prove that FPILP, EMLS, and related fragments of set theory are NP-average complete, and show that there are simple distributions that will frustrate any algorithm for these decision problems.
-
TR1995-714
1995
A Highly Expressive Language of Spatial Constraints
Davis, E.
Abstract
|
PDF
Title: A Highly Expressive Language of Spatial Constraints
Author(s): Davis, E.
Abstract:
AI applications require the representation and manipulation of partial spatial knowledge of many different kinds. This paper argues that a representation rich in primitives but fairly restricted in logical form will suffice for many of these purposes. We present and discuss one such representation language. We demonstrate that the language is expressive enough to capture exactly or closely approximate many of the representations that have been used in the AI literature. It also contains some original constructs for dealing with collections of regions of unknown cardinality.
-
TR1995-706
1995
Approximation and Abstraction in Solid Object Kinematics
Davis, E.
Abstract
|
PDF
Title: Approximation and Abstraction in Solid Object Kinematics
Author(s): Davis, E.
Abstract:
Physical reasoning often involves approximating or abstracting the situation or the theory at hand. This paper studies the nature of approximation and abstraction as applied to the kinematic theory of rigid solid objects.
Five categories of approximation are considered: 1. Geometric approximation. 2. Abstraction of a complex kinematic structure by a simpler kinematic structure. For example, the abstraction of a collection of tightly-linked objects as a single object. 3. Abstraction of a kinematic structure by a simpler theory. For example, the abstraction by a connectivity graph in configuration space. 4. Approximation of a complex kinematic structure by a simpler structure in a more complex theory. For example, the approximation of a chain by a string. 5. Approximation of a more complex theory by a kinematic theory. For example, the approximation of solid object dynamics by kinematics.
We discuss how some of these types of approximation can be implemented and combined. We conclude that abstraction and approximation are open-ended styles of reasoning, rather than neatly categorizable meta-relationships.
-
TR1995-703
1995
Approximations of Shape and Configuration Space
Davis, E.
Abstract
|
PDF
Title: Approximations of Shape and Configuration Space
Author(s): Davis, E.
Abstract:
We consider the issue of shape approximation in kinematic mechanical systems, that is, systems of rigid solid objects whose behavior can be characterized entirely in terms of the constraints that each object moves rigidly and that no two objects overlap, without considering masses or forces. The general question we address is the following: suppose we have calculated the behavior of some kinematic system using ideal descriptions of the shapes of the objects involved. Does it then follow that a real mechanism, in which the shapes of the objects approximate this ideal, will have a similar behavior? In addressing this question, we present various possible definitions of what it means (a) for one shape to approximate another and (b) for the behavior of one mechanism to be similar to the behavior of another. We characterize the behavioral properties of a kinematic system in terms of its configuration space, that is, the set of physically feasible positions and orientations of the objects. We prove several existential theorems that guarantee that a sufficiently precise approximation of shape preserves significant properties of configuration space. In particular, we show that it is often possible to guarantee that the configuration space of system A is close to that of system B in terms of metric criteria by requiring that the shapes of A closely approximate those of B in terms of the dual-Hausdorff distance, and that it is often possible to guarantee further that the configuration space of A is topologically similar to that of B by requiring that the surface normals be close at corresponding boundary points of A and B.
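The Hausdorff distance behind the metric criterion can be computed for finite point samples (a sketch of ours; the paper's dual-Hausdorff distance on solid regions refines this set-to-set notion):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets:
    the larger of the two directed distances, where the directed
    distance from A to B is the worst-case distance from a point of
    A to its nearest point of B."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(),   # directed A -> B
               D.min(axis=0).max())   # directed B -> A

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.1], [1.0, 0.0], [2.0, 0.0]])
d = hausdorff(A, B)   # dominated by B's point (2, 0), far from A
```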
-
Ph.D. Thesis
1995
Practical Structures for Parallel Operating Systems
Edler, Jan
Abstract
|
PDF
Title: Practical Structures for Parallel Operating Systems
Candidate: Edler, Jan
Advisor(s): Gottlieb, Allan
Abstract:
Large shared memory MIMD computers, with hundreds or thousands of processors, pose special problems for general purpose operating system design. In particular:
- 1. Serial bottlenecks that are insignificant for smaller machines can seriously limit scalability.
- 2. The needs of parallel programming environments vary greatly, requiring a flexible model for runtime support.
- 3. Frequent synchronization within parallel applications can lead to high overhead and bad scheduling decisions.
Because of these difficulties, the algorithms, data structures, and abstractions of conventional operating systems are not well suited to highly parallel machines.
We describe the Symunix operating system for the NYU Ultracomputer, a machine with hardware support for Fetch&Phi operations and combining of memory references. Avoidance of serial bottlenecks, through careful design of interfaces and use of highly parallel algorithms and data structures, is the primary goal of the system. Support for flexible parallel programming models relies on user-mode code to implement common abstractions such as threads and shared address spaces. Close interaction between the kernel and user-mode runtime layer reduces the cost of synchronization and avoids many scheduling anomalies.
Symunix first became operational in 1985 and has been extensively used for research and instruction on two generations of Ultracomputer prototypes at NYU. We present data gathered from actual multiprocessor execution to support our claim of practicality for future large systems.
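The Fetch&Phi style of coordination central to the Ultracomputer design, with fetch-and-add as the canonical instance, lets many processes claim distinct work items without serializing on a shared lock around the work itself. A sketch (our own model in Python, with a lock standing in for the hardware atomicity that the Ultracomputer provides by combining references in the network):

```python
import threading

class FetchAndAdd:
    """Software model of fetch-and-add: atomically return the old
    value and add `delta`. In the Ultracomputer this is a single
    combinable memory operation, not a lock-based sequence."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()
    def fetch_add(self, delta=1):
        with self._lock:
            old = self._value
            self._value += delta
            return old

counter = FetchAndAdd()
claimed, claimed_lock = [], threading.Lock()

def worker(n_items):
    while True:
        i = counter.fetch_add(1)      # each call yields a distinct index
        if i >= n_items:
            return
        with claimed_lock:
            claimed.append(i)

threads = [threading.Thread(target=worker, args=(100,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

Every index from 0 to 99 is claimed exactly once, with no agreement protocol among the workers beyond the primitive itself.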
-
Ph.D. Thesis
1995
Dreme: for Life in the Net
Fuchs, Matthew
Abstract
|
PDF
Title: Dreme: for Life in the Net
Candidate: Fuchs, Matthew
Advisor(s): Perlin, Ken
Abstract:
This dissertation makes four contributions towards supporting distributed, multi-user applications over open networks.
Dreme, a distributed dialect of the Scheme language in which all first-class language objects are mobile in the network. In particular, various distributed topologies, such as client/server and peer-to-peer, can be created by migrating closures with overlapping scopes around the network, correct inter-process communication being assured by Scheme's lexical scoping rules and network wide addressing. Threads of control are passed around through first-class distributed continuations.
A User Interface toolkit for coordinating events in multi-threaded, multi-user applications by organizing continuation callbacks into nested lexical scopes. Each event has certain attributes, such as synchronous/asynchronous. Certain events create new scopes with new events. Continuation callbacks allow both synchronous events, which return values to their callers, and asynchronous ones. Application logic needn't be spread throughout the application, as it is with programs structured around an event loop.
A distributed garbage collection algorithm that collects all cycles on an open network. The basic algorithm depends on maintaining the inverse reference graph (IRG) among network nodes (i.e., if a->b is in the regular graph, b->a is in the IRG). A single IRG traversal from any object determines the status of each object touched. Communication is decentralized (any object can choose to determine its status), garbage is touched O(1) times (in the absence of failures), it is fault-tolerant, and can handle malicious or faulty neighbors. Each operation uses messages linear in the size of the IRG. Overlapping operations perform like parallel quick sort.
An approach to using the Standard Generalized Markup Language (SGML) over the network to support distributed GUIs, intelligent clients, and mobile agents. SGML is a meta-grammar for creating domain specific document markup languages to which a variety of semantics (display, reading/writing databases, etc.) can be applied. The document, its grammar, and some semantics, are retrieved over the network. Applications normally create interfaces directly out of graphic objects to communicate with the user. However, if the interface has some semantics (and is parsable), a computational agent can interpret the interface and talk directly to the application on behalf of the human.
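The inverse-reference-graph idea in the garbage-collection contribution can be sketched concretely (a toy model of ours: an object is garbage exactly when no root is found by a traversal of the IRG starting from it):

```python
def is_garbage(obj, irg, roots):
    """Decide one object's status with a single IRG traversal.
    `irg[x]` lists the objects that point *to* x (the inverse of the
    ordinary reference graph). The object is live iff some root
    appears among its transitive referrers."""
    seen, frontier = set(), [obj]
    while frontier:
        x = frontier.pop()
        if x in roots:
            return False               # a root (transitively) references obj
        if x not in seen:
            seen.add(x)
            frontier.extend(irg.get(x, ()))
    return True

# root -> a, while b <-> c form a dead cycle: irg holds the reversed edges.
irg = {"a": ["root"], "b": ["c"], "c": ["b"]}
```

Because the traversal starts at the object being tested, any object can decide its own status without central coordination, matching the decentralized flavor described above.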
-
Ph.D. Thesis
1995
Fault-tolerant Parallel Processing Combining Linda, Checkpointing, and Transactions
Jeong, Karpjoo
Abstract
|
PDF
Title: Fault-tolerant Parallel Processing Combining Linda, Checkpointing, and Transactions
Candidate: Jeong, Karpjoo
Advisor(s): Shasha, Dennis
Abstract:
With the advent of high performance workstations and fast LANs, networks of workstations have recently emerged as a promising computing platform for long-running coarse grain parallel applications. Their advantages are wide availability and cost-effectiveness, as compared to massively parallel computers. Long-running computation in the workstation environment, however, requires both fault tolerance and the effective utilization of idle workstations.
In this dissertation, we present a variant of Linda, called Persistent Linda (PLinda), that treats these two issues uniformly: specifically, PLinda treats non-idleness as failure.
PLinda provides a combination of checkpointing and transaction support on both data and program state (an encoding of continuations). The traditional transaction model is simplified and then extended to support robust parallel computation. Treatable failures include hard failures and slowdowns of processors and main memory, and omission and corruption failures of the network.
The programmer can customize fault tolerance when constructing an application, trading failure-free performance against recovery time. When creating a PLinda program, the programmer can decide on the frequency of transactions and the encoding of continuations to be saved upon transaction commit. At runtime, the programmer can decide to suppress certain continuations for better failure-free performance.
PLinda has been applied to corporate bond index statistics computation and biological pattern recognition.
-
TR1995-692
1995
A Knowledge Representation Based on the Belnap's Four-Valued Logic
Kaluzhny, Y.;
Muravitsky, A.
Abstract
|
PDF
Title: A Knowledge Representation Based on the Belnap's Four-Valued Logic
Author(s): Kaluzhny, Y.; Muravitsky, A.
Abstract:
We treat knowledge from a computer-oriented point of view, considering it as a kind of data type. An axiomatic approach to the notion of data type undertaken by Dana Scott in [D.S.Scott, Outline of a Mathematical Theory of Computation, in: Proceedings of Princeton Conference on Information Science, 1971, pp. 169--176] is explored to find entities suitable for representation techniques. At the same time, we stay within Belnap's paradigm of the toleration of inconsistency. We propose a representation of knowledge (possibly with contradictions) in a simple propositional language, and we show how such knowledge can be maintained and how it should be transformed on receipt of new information. In this transformation, the key role is played by Scott's continuity rather than consistency.
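Belnap's four values and the information ordering used when absorbing new reports can be modeled concretely (our own minimal encoding, not the report's: a value is the set of classical truth values reported so far, with None = {} and Both = {True, False}, and new information merged by union):

```python
# Belnap's four truth values as subsets of {True, False}.
NONE = frozenset()                  # told nothing
T    = frozenset({True})            # told true
F    = frozenset({False})           # told false
BOTH = frozenset({True, False})     # told both: inconsistency tolerated

def tell(kb, atom, value):
    """Merge a new report about `atom` into the knowledge base by
    union, i.e. by moving up the information (approximation) order.
    A conflicting report yields BOTH rather than a failure, in the
    spirit of tolerating inconsistency."""
    new = dict(kb)
    new[atom] = new.get(atom, NONE) | value
    return new

kb = tell({}, "p", T)
kb = tell(kb, "p", F)   # a contradictory report about p
```

Since union is monotone, `tell` never discards information; it only refines the knowledge state upward, which is the continuity-friendly behavior the report emphasizes.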
-
TR1995-695
1995
Dirichlet Problem for the Schrodinger Operator in a Half-space with Boundary Data of Arbitrary Growth at Infinity
Kheyfits, A.
Abstract
|
PDF
Title: Dirichlet Problem for the Schrodinger Operator in a Half-space with Boundary Data of Arbitrary Growth at Infinity
Author(s): Kheyfits, A.
Abstract:
We consider the Dirichlet problem for the Schrodinger operator in a half-space with boundary data having arbitrary growth at infinity. A solution is constructed as the generalized Poisson integral. Uniqueness of the solution is also investigated.
-
TR1995-683
1995
An Optimal Preconditioner for a Class of Saddle Point Problems with a Penalty Term, Part II: General Theory
Klawonn, A.
Abstract
|
PDF
Title: An Optimal Preconditioner for a Class of Saddle Point Problems with a Penalty Term, Part II: General Theory
Author(s): Klawonn, A.
Abstract:
Iterative methods are considered for saddle point problems with penalty term. A positive definite preconditioner is constructed and it is proved that the condition number of the preconditioned system can be made independent of the discretization and the penalty parameters. Examples include the pure displacement problem in linear elasticity, the Timoshenko beam, and the Mindlin-Reissner plate.
Key words: Saddle point problems, penalty term, nearly incompressible materials, Timoshenko, Mindlin-Reissner, preconditioned conjugate residual method, multilevel, domain decomposition.
Please note: This report is a revised version of tr676.
-
TR1995-699
1995
Run-time versus Compile-time Instruction Scheduling in Superscalar (RISC) Processors: Performance and Tradeoffs
Leung, A.;
Palem, K.; Ungureanu, C.
Abstract
|
PDF
Title: Run-time versus Compile-time Instruction Scheduling in Superscalar (RISC) Processors: Performance and Tradeoffs
Author(s): Leung, A.; Palem, K.; Ungureanu, C.
Abstract:
The RISC revolution has spurred the development of processors with increasing levels of instruction level parallelism (ILP). In order to realize the full potential of these processors, multiple instructions must be issued and executed in a single cycle. Consequently, instruction scheduling plays a crucial role as an optimization in this context. While early attempts at instruction scheduling were limited to compile-time approaches, the recent trend is to provide dynamic support in hardware. In this paper, we present the results of a detailed comparative study of the performance advantages to be derived from the spectrum of instruction scheduling approaches: from limited basic-block schedulers in the compiler, to novel and aggressive run-time schedulers in hardware. A significant portion of our experimental study, via simulations, is devoted to understanding the performance advantages of run-time scheduling. Our results indicate that it is effective in extracting the ILP inherent in the program trace being scheduled, over a wide range of machine and program parameters. Furthermore, we show that this effectiveness can be further enhanced by a simple basic-block scheduler in the compiler which optimizes for the presence of the run-time scheduler in the target; current basic-block schedulers are not designed to take advantage of this feature. We demonstrate this fact by presenting a novel enhanced basic-block scheduler in this paper. Finally, we outline a simple analytical characterization of the performance advantage that run-time schedulers have to offer.
Key words: Compile-time Optimizations, Dynamic Schedulers, Instruction Scheduling, Program Traces, Scope, Superscalar Processors
-
Ph.D. Thesis
1995
A Model-Based 3-D Object Recognition System Using Geometric Hashing with Attributed Features
Liu, Jyhjong
Abstract
|
PDF
Title: A Model-Based 3-D Object Recognition System Using Geometric Hashing with Attributed Features
Candidate: Liu, Jyhjong
Advisor(s): Hummel, Robert
Abstract:
We build an object recognition system that is able to recognize 3-D objects such as vehicles embedded in highly complicated backgrounds. We use the geometric hashing method, augmenting the approach through the use of attributed features, k-d trees for access to features, and the use of bounds to limit the search.
We make use of expressive features to improve the performance of a geometric hashing object recognition system. Various kinds of attributed features, such as the midpoint of a line segment with its orientation, the endpoints of a line segment with its orientation, and circle features with their centers, are extracted and used in our system.
The number of features as well as the type of features in each model can vary. We make use of weighted voting, which has a Bayesian interpretation. The distribution of the invariants for various features as well as the bounds of the weighted voting formula are analyzed. In order to improve the performance of the system, we use a k-d tree to search entries in high-dimensional hash tables. The method is generalized in order to treat variables taking on values from a non-interval domain, such as data measuring angles. To make use of available computer resources, we distribute the computation, assigning evidence accumulation for a single hypothesis to one processor in a multiple processor and multiple workstation environment. The implementation reduces the communication overhead to minimum. The system is implemented using the Khoros software development system.
The results of target recognition are reported in numerous experiments. The experiments show that the use of more expressive features improves the performance of the recognition system.
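The k-d tree lookup used for the high-dimensional hash tables follows the standard construction. As an illustrative sketch only (the 2-D setting, function names, and dictionary-based node representation are ours, not the thesis's), a minimal build-and-query implementation in Python:

```python
import math

def build_kdtree(points, depth=0):
    # Recursively build a k-d tree: split on alternating axes at the median.
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
        "axis": axis,
    }

def nearest(node, target, best=None):
    # Depth-first search that prunes subtrees on the far side of the
    # splitting plane whenever they cannot contain a closer point.
    if node is None:
        return best
    if best is None or math.dist(node["point"], target) < math.dist(best, target):
        best = node["point"]
    axis = node["axis"]
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if abs(diff) < math.dist(best, target):  # far side may still hold a closer point
        best = nearest(far, target, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))
```

The pruning test, comparing the distance to the splitting plane against the current best distance, is what lets such a tree answer entry lookups without scanning the whole table in favorable cases.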
-
TR1995-713
1995
Computational Real Algebraic Geometry
Mishra, B.
Abstract
|
PDF
Title: Computational Real Algebraic Geometry
Author(s): Mishra, B.
Abstract:
Computational real algebraic geometry studies various algorithmic questions dealing with the real solutions of a system of equalities, inequalities, and inequations of polynomials over the real numbers. This emerging field is largely motivated by the power and elegance with which it solves a broad and general class of problems arising in robotics, vision, computer aided design, geometric theorem proving, etc.
The following survey paper discusses the underlying concepts, algorithms and a series of representative applications. This paper will appear as a chapter in the "Handbook of Discrete and Computational Geometry" (Edited by J.E. Goodman and J. O'Rourke), CRC Series in Discrete and Combinatorial Mathematics.
-
TR1995-680
1995
Grasp Metrics: Optimality and Complexity
Mishra, B.
Abstract
|
PDF
Title: Grasp Metrics: Optimality and Complexity
Author(s): Mishra, B.
Abstract:
In this paper, we discuss and compare various metrics for goodness of a grasp. We study the relations and trade-offs among the goodness of a grasp, geometry of the grasped object, number of fingers and the computational complexity of the grasp-synthesis algorithms. The results here employ the techniques from convexity theory first introduced by the author and his colleagues.
-
TR1995-709
1995
Three Finger Optimal Planar Grasp
Mishra, B.;
Teichman, M.
Abstract
|
PDF
Title: Three Finger Optimal Planar Grasp
Author(s): Mishra, B.; Teichman, M.
Abstract:
In this paper, we study various algorithmic questions regarding the computation of an optimal three finger planar grasp. We present a novel O(n^2 log n)-time algorithm to compute such an optimal grasp for an arbitrary simple n-gon. This algorithm can be used for finding ``good'' immobilizing sets. We also discuss several variations on the problem and many intriguing open questions in the area that remain unsolved.
-
TR1995-710
1995
On the Lidskii-Vishik-Lyusternik Perturbation Theory for Eigenvalues of Matrices with Arbitrary Jordan Structure
Moro, J.;
Burke, J.V.; Overton, M.L.
Abstract
|
PDF
Title: On the Lidskii-Vishik-Lyusternik Perturbation Theory for Eigenvalues of Matrices with Arbitrary Jordan Structure
Author(s): Moro, J.; Burke, J.V.; Overton, M.L.
Abstract:
Let $A$ be a complex matrix with arbitrary Jordan structure, and $\lambda$ an eigenvalue of $A$ whose largest Jordan block has size $n$. We review previous results due to Lidskii, showing that the splitting of $\lambda$ under a small perturbation of $A$ of order $\epsilon$, is, generically, of order $\epsilon^{1/n}$. Explicit formulas for the leading coefficients are obtained, involving the perturbation matrix and the eigenvectors of $A$. We also present an alternative proof of Lidskii's main theorem, based on the use of the Newton diagram. This approach clarifies certain difficulties which arise in the nongeneric case, and leads, in some situations, to the extension of Lidskii's results. These results suggest a new notion of Holder condition number for multiple eigenvalues, depending only on the conditioning of the associated eigenvectors, not the conditioning of the Jordan vectors.
-
TR1995-698
1995
Combined Instruction Scheduling and Register Allocation
Motwani, R.;
Palem, K.; Sarkar, V.; Reyen, S.
Abstract
|
PDF
Title: Combined Instruction Scheduling and Register Allocation
Author(s): Motwani, R.; Palem, K.; Sarkar, V.; Reyen, S.
Abstract:
In this paper, we describe a novel framework, referred to as CRISP, for expressing an optimization problem that simultaneously addresses instruction scheduling and register allocation. By modeling spill-costs directly in the scheduling problem, CRISP permits the design of algorithms whose objective is to exploit the available instruction level parallelism --- the traditional goal of instruction scheduling --- while lowering the cost of register spilling at the same time. Currently, these optimizations are performed in separate phases and interact in ways that are not characterized very well, leading to phase-ordering problems. We also develop a fast heuristic in this paper for solving this combined optimization in the context of basic-blocks; our algorithm runs in time O(E N), where the basic block of N instructions has E edges, and this time includes all preprocessing costs. In comparison to conventional phase-ordered approaches, our combined heuristic performed well in experimental evaluations on basic-blocks with sizes in the range 5 to 100. We also present rigorous characterizations of the inherent difficulty of solving optimization problems in our CRISP framework, as well as in classical frameworks. A surprising outcome of this work is that problems expressed in CRISP are provably easier to solve than, say, graph coloring --- the classical approach to expressing just one of the two phases of interest, namely register allocation. By eliminating the phase-ordering problem, CRISP lowers the overall complexity of the software engineering effort involved, since optimizers designed on our unified approach will be relatively ``lightweight'' compared to those that have to cope with phase-ordering. This has positive implications both for the duration of design cycles and for the concomitant costs of designing low-level optimizations in modern compilers.
-
TR1995-690
1995
A Framework for Knowledge-Based Systems
Muravitsky, A.
Abstract
|
PDF
Title: A Framework for Knowledge-Based Systems
Author(s): Muravitsky, A.
Abstract:
The paper continues the theme of [ knowledge.ps ]. We differentiate our approach to knowledge representation from that of others by expressing the following Working Hypothesis: Knowledge is a data type, and knowledge revision is accomplished by continuous operations on it, which are coordinated with its effective basis. Staying within the limits of Belnap's paradigm of the admittance of contradictory information into the computer's memory, our purpose in this paper is to simplify as much as possible the computational processes needed for modifying the current state of the computer's knowledge, and to describe conditions for possible maneuvering. In particular, we solve some problems of decidability concerning operations on the minimal states, which are regarded as natural knowledge transformers. We also show how to express those operations in lattice-theoretic terms, which leads to the simplification of their computation on the lattice of minimal states. The problem of backtracking in the presented context is considered as well.
-
TR1995-693
1995
A Perspective of New Foundations for Knowledge Maintenance Systems: Research Program
Muravitsky, A.
Abstract
|
PDF
Title: A Perspective of New Foundations for Knowledge Maintenance Systems: Research Program
Author(s): Muravitsky, A.
Abstract:
We propose to provide new mathematical foundations for the design of knowledge-based systems. The underlying idea is that the knowledge which the computer (``artificial agent'') operates with is considered as a kind of abstract data type. In this context, a relation of approximation arises in a natural way when one imagines the computer as operating in a changing information environment (``information flow''). This notion of approximation can be studied using the techniques that have been developed for domain theory in the context of denotational semantics of programming languages.
-
TR1995-689
1995
Knowledge Representation as Domains
Muravitsky, A.
Abstract
|
PDF
Title: Knowledge Representation as Domains
Author(s): Muravitsky, A.
Abstract:
This is a continuing attempt in a series of papers [ knowledge.ps, inform.ps, frame.ps ] to show how computer-represented knowledge can be arranged as elements of an effectively represented semantic domain in the sense of [C.A.Gunter and D.S.Scott, Semantic Domains, in: J. van Leeuwen (ed.), Handbook of Theoretical Computer Science, Vol. B, pp. 635--674]. We present a direct deductive description of the domain, which was defined semantically in [ knowledge.ps ], via Scott's notion of an information system. Also, the internal structure of the continuous ampliative operations coordinated with the domain's effective basis is established. Though we always remain within the paradigm of the toleration of contradictory information described in [N.D.Belnap, A Useful Four-Valued Logic: How a Computer Should Think, in: A.R.Anderson, N.D.Belnap, and J.M.Dunn, Entailment: the Logic of Relevance and Necessity, Vol. 2, Princeton Univ. Press, 1992], the approach in question could be extended to include domains for consistent knowledge bases.
-
TR1995-694
1995
Logic of Information Knowledge
Muravitsky, A.
Abstract
|
PDF
Title: Logic of Information Knowledge
Author(s): Muravitsky, A.
Abstract:
We share with some philosophers the view that a state of knowledge, being a part of the real world, can bring contradiction into it. Such an ontological reading of knowledge is very important when one deals with information knowledge, which arises as the content of the computer's memory when the computer is placed into a changing information environment ("information flow") and must be able to tolerate anything (not excluding contradictions) from the computer's users. Continuing research begun in [KM 93], we consider at length one kind of Scott-continuous operation introduced there. Each such operation [A->B](x), where A and B are formulas in a propositional language, called a rule, moves the computer to a "minimal" state of knowledge in which B is true, if A is true in the current state. Note that the notion of rule is used here in an information-transforming sense, rather than in the ordinary truth-sound sense. We distinguish between global and local rules and show that these notions are decidable. Also, we define a modal epistemic logic as a tool for the prediction of the possible evolution of the system's knowledge and establish the decidability of this logic.
-
TR1995-688
1995
On the First Degree Entailment of Two 3-Valued Logics
Muravitsky, A.
Abstract
|
PDF
Title: On the First Degree Entailment of Two 3-Valued Logics
Author(s): Muravitsky, A.
Abstract:
We note first that the first degree entailment of {\L}ukasiewicz's 3-valued logic and of a 3-valued logic extracted from Belnap's 4-valued logic is the same. Then, we give an axiomatization of that entailment as the calculus E_{fde} + A & -A -> B \/ -B, where E_{fde} is the first degree entailment of Anderson-Belnap's logic E of relevance and necessity.
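Under an order-based reading of first-degree entailment (an assumption on our part; the paper's precise definition may differ), the added axiom A & -A -> B \/ -B can be checked exhaustively over Łukasiewicz's three truth values, since min(a, 1-a) never exceeds 1/2 while max(b, 1-b) never falls below it:

```python
from itertools import product

VALUES = (0.0, 0.5, 1.0)   # Łukasiewicz 3-valued truth values

def neg(x):
    return 1.0 - x

def conj(x, y):
    return min(x, y)

def disj(x, y):
    return max(x, y)

# Order-based check of the entailment A & -A |= B \/ -B: the premise's
# value never exceeds the conclusion's value under any assignment.
holds = all(
    conj(a, neg(a)) <= disj(b, neg(b))
    for a, b in product(VALUES, repeat=2)
)
print(holds)  # True
```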
-
TR1995-691
1995
Some Knowledge Transformers: Infons and Constraints
Muravitsky, A.
Abstract
|
PDF
Title: Some Knowledge Transformers: Infons and Constraints
Author(s): Muravitsky, A.
Abstract:
The goal of this paper is twofold: first, to present a general scheme within which information is supposed to turn into computer-represented knowledge, and second, to define two natural kinds of transformers of this knowledge which this scheme leads us to consider.
-
TR1995-697
1995
New Mathematical Foundations for Knowledge Maintenance Systems: Research Program
Muravitsky, Alexei Yu.
Abstract
|
PDF
Title: New Mathematical Foundations for Knowledge Maintenance Systems: Research Program
Author(s): Muravitsky, Alexei Yu.
Abstract:
We propose to provide new mathematical foundations for the design of knowledge-based systems. The underlying idea is that the knowledge which the computer ("artificial agent") operates with is considered as a kind of abstract data type. In this context, a relation of approximation arises in a natural way when one imagines the computer as operating in a changing information environment ("information flow"). This notion of approximation can be studied using the techniques that have been developed for domain theory in the context of denotational semantics of programming languages.
-
TR1995-696
1995
Some Knowledge Transformers: Infons and Constraints
Muravitsky, Alexei Yu.
Abstract
|
PDF
Title: Some Knowledge Transformers: Infons and Constraints
Author(s): Muravitsky, Alexei Yu.
Abstract:
The goal of this paper is twofold: first, to present a general scheme within which information is supposed to turn into computer-represented knowledge, and second, to define two natural kinds of transformers of this knowledge which this scheme leads us to consider.
-
TR1995-686
1995
Double Hashing is Computable and Randomizable with Universal Hash Functions
Schmidt, J.;
Siegel, A.
Abstract
|
PDF
Title: Double Hashing is Computable and Randomizable with Universal Hash Functions
Author(s): Schmidt, J.; Siegel, A.
Abstract:
Universal hash functions that exhibit (c log n)-wise independence are shown to give, in double hashing and virtually any reasonable generalization of double hashing, an expected probe count of 1/(1-alpha) + epsilon for the insertion of the (alpha n)-th item into a table of size n, for any fixed alpha < 1 and epsilon > 0. This performance is within epsilon of optimal. These results are derived from a novel formulation that overestimates the expected probe count by underestimating the presence of partial items already inserted into the hash table, and from a sharp analysis of the underlying stochastic structures formed by colliding items.
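As a rough empirical companion to this result (a toy simulation of ours, with naive deterministic hash functions rather than the paper's (c log n)-wise independent families), one can fill a table by double hashing and compare probe counts at load factor alpha with the idealized 1/(1-alpha):

```python
import random

def double_hash_insert(table, key, h1, h2):
    # Probe h1(k), h1(k)+h2(k), h1(k)+2*h2(k), ... (mod table size) and
    # return the number of probes needed to find an empty slot.
    n = len(table)
    start, step = h1(key), h2(key)
    for i in range(n):
        slot = (start + i * step) % n
        if table[slot] is None:
            table[slot] = key
            return i + 1
    raise RuntimeError("table full")

random.seed(0)
n = 10007                               # prime size: any nonzero step cycles all slots
alpha = 0.5
h1 = lambda k: k % n
h2 = lambda k: 1 + (k // n) % (n - 1)   # step in [1, n-1], never 0
table = [None] * n
keys = random.sample(range(10**9), int(alpha * n) + 200)
for k in keys[: int(alpha * n)]:        # fill the table to load factor alpha
    double_hash_insert(table, k, h1, h2)
# average probes for insertions made at load ~alpha, vs. the ideal 1/(1-alpha)
probes = [double_hash_insert(table, k, h1, h2) for k in keys[int(alpha * n):]]
mean = sum(probes) / len(probes)
print(mean, "vs ideal", 1 / (1 - alpha))
```

With random keys the measured mean lands near 2 for alpha = 0.5, in line with the 1/(1-alpha) benchmark the paper's bound targets.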
-
TR1995-708
1995
An API for Choreographing Data Accesses
Shriver, E.A.M.;
Wisniewski, L.F.
Abstract
|
PDF
Title: An API for Choreographing Data Accesses
Author(s): Shriver, E.A.M.; Wisniewski, L.F.
Abstract:
Current APIs for multiprocessor multi-disk file systems are not easy to use in developing out-of-core algorithms that choreograph parallel data accesses. Consequently, the efficiency of these algorithms is hard to achieve in practice. We address this deficiency by specifying an API that includes data-access primitives for data choreography. With our API, the programmer can easily access specific blocks from each disk in a single operation, thereby fully utilizing the parallelism of the underlying storage system.
Our API supports the development of libraries of commonly-used higher-level routines such as matrix-matrix addition, matrix-matrix multiplication, and BMMC (bit-matrix-multiply/complement) permutations. We illustrate our API in implementations of these three high-level routines to demonstrate how easy it is to use.
-
TR1995-687
1995
Closed Hashing is Computable and Optimally Randomizable with Universal Hash Functions
Siegel, A.;
Schmidt, J.
Abstract
|
PDF
Title: Closed Hashing is Computable and Optimally Randomizable with Universal Hash Functions
Author(s): Siegel, A.; Schmidt, J.
Abstract:
-
TR1995-684
1995
On Universal Classes of Extremely Random Constant Time Hash Functions and their Time-space Tradeoff
Siegel, A.
Abstract
|
PDF
Title: On Universal Classes of Extremely Random Constant Time Hash Functions and their Time-space Tradeoff
Author(s): Siegel, A.
Abstract:
A family of functions F that map [0,n]->[0,n] is said to be h-wise independent if any h points in [0,n] have an image, for randomly selected f in F, that is uniformly distributed. This paper gives both probabilistic and explicit randomized constructions of (n**epsilon)-wise independent functions, for epsilon < 1, that can be evaluated in constant time for the standard random access model of computation. Simple extensions give comparable behavior for larger domains. As a consequence, many probabilistic algorithms can for the first time be shown to achieve their expected asymptotic performance for a feasible model of computation.
This paper also establishes a tight tradeoff in the number of random seeds that must be precomputed for a random function that runs in time T and is h-wise independent.
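For contrast, the textbook k-wise independent family, a random degree-(k-1) polynomial over a prime field, is easy to write down but costs k arithmetic operations per evaluation; the paper's contribution is precisely to reach much higher independence in constant time. A sketch of the textbook family (all names and parameters below are ours, not the paper's construction):

```python
import random

def make_kwise_hash(k, p=2_147_483_647, m=1024):
    # A random polynomial of degree k-1 over GF(p), reduced mod m, is a
    # classic k-wise independent family: any k distinct inputs receive
    # jointly (near-)uniform values. Evaluation costs O(k), not O(1).
    coeffs = [random.randrange(p) for _ in range(k)]
    def h(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod p
            acc = (acc * x + c) % p
        return acc % m
    return h

random.seed(1)
h = make_kwise_hash(4)               # a 4-wise independent function
print([h(x) for x in range(5)])
```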
-
TR1995-685
1995
Toward a Usable Theory of Chernoff Bounds for Heterogeneous and Partially Dependent Random Variables
Siegel, A.
Abstract
|
PDF
Title: Toward a Usable Theory of Chernoff Bounds for Heterogeneous and Partially Dependent Random Variables
Author(s): Siegel, A.
Abstract:
Let X be a sum of real valued random variables and have a bounded mean E[X]. The generic Chernoff-Hoeffding estimate for large deviations of X is $P\{X-E[X]\ge a\} \le \min_{y\ge 0} e^{-y(a+E[X])}\,E[e^{yX}]$, which applies with a >= 0 to random variables with very small tails. At issue is how to use this method to attain sharp and useful estimates. We present a number of Chernoff-Hoeffding bounds for sums of random variables that may have a variety of dependence relationships and that may be heterogeneously distributed.
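For the i.i.d. Bernoulli special case (a sketch of the generic recipe only, not the paper's heterogeneous or partially dependent bounds), the minimization over y can be carried out numerically and checked against simulation:

```python
import math
import random

def chernoff_tail_bound(n, p, a, grid=2000):
    # Generic bound for X = sum of n Bernoulli(p) variables:
    #   P{X - E[X] >= a} <= min_{y>=0} exp(-y(a+E[X])) * E[exp(yX)],
    # with the Bernoulli MGF E[exp(yX)] = (1 - p + p*exp(y))**n.
    # Work in log space to avoid overflow; grid-search y in (0, 5].
    mean = n * p
    best_log = 0.0   # y = 0 gives the trivial bound 1
    for i in range(1, grid + 1):
        y = 5.0 * i / grid
        log_bound = -y * (a + mean) + n * math.log(1 - p + p * math.exp(y))
        best_log = min(best_log, log_bound)
    return math.exp(best_log)

random.seed(0)
n, p, a = 400, 0.5, 25               # bound the tail P{X >= 225}
bound = chernoff_tail_bound(n, p, a)
trials = 5000
hits = sum(sum(random.random() < p for _ in range(n)) >= n * p + a
           for _ in range(trials))
empirical = hits / trials
print(empirical, "<=", bound)
```

The empirical tail frequency sits comfortably below the bound, as it must; the slack reflects exactly the sharpness question the paper addresses.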
-
Ph.D. Thesis
1995
Grasping and Fixturing: a Geometric Study and an Implementation
Teichmann, Marek
Abstract
|
PDF
Title: Grasping and Fixturing: a Geometric Study and an Implementation
Candidate: Teichmann, Marek
Advisor(s): Mishra, Bud
Abstract:
The problem of immobilizing an object by placing ``fingers'' (or points) on its boundary occurs in the fields of dexterous manipulation, manufacturing and geometry. In this dissertation, we consider the purely static problems of good grasp and fixture set synthesis, and explore their connection to problems in computational and combinatorial geometry. Two efficient randomized approximation algorithms are proposed, for finding the smallest cover for a given convex set and for finding the largest magnitude by which a convex set can be scaled and still be covered by a cover of a given size. They generalize an algorithm by Clarkson. The cover points are selected from a set of n points. The following bounds are valid for both types of problems: for the former, c is the size of the optimal cover, and for the latter, c is the desired cover size. In both cases, a cover of size $4cd \lg c$ is returned. The running time depends on the set to be covered: covering an n-vertex polytope in $R^d$ takes $O(c^2 n \log n \log c)$ expected time, and covering a ball takes $O(nc^{1+\delta}+c^{\lfloor{d/2}\rfloor+1}\log n\log^{\lfloor{d/2}\rfloor} c)$ expected time. These algorithms have applications to finding a good grasp or fixture set. An $O(n^2 \log n)$ algorithm for finding optimal 3-finger grasps for n-sided polygons is also given.
We also introduce a new grasp efficiency measure based on a certain class of ellipsoids, invariant under rigid motions of the object coordinate system. To our knowledge, this is the first measure having this property. We also introduce a new reactive grasping paradigm which does not require a priori knowledge of the object. This paradigm leads to several reactive algorithms for finding a grasp for parallel-jaw grippers and three-finger robot hands equipped with simple sensors. We show their correctness and discuss our implementation of one such algorithm: a parallel-jaw gripper with light-beam sensors which we have built. A short video demonstration will also be shown.
-
TR1995-679
1995
Report on NSF Workshop on Manufacturing and Computational Geometry
Yap, C.
Abstract
|
PDF
Title: Report on NSF Workshop on Manufacturing and Computational Geometry
Author(s): Yap, C.
Abstract:
This is a summary of the NSF Workshop on Manufacturing and Computational Geometry, held at the Courant Institute of Mathematical Sciences, New York University, on April 1-2, 1994. The meeting brought together about 30 participants from both the manufacturing and the computational geometry communities for the purposes of discussing current trends in the two communities, identifying areas of mutual interest, and proposing future joint activities.
-
TR1994-659
1994
A New Primal-Dual Interior-Point Method for Semidefinite Programming
Alizadeh, F.;
Haeberly, J. A.; Overton, M.
Abstract
|
PDF
Title: A New Primal-Dual Interior-Point Method for Semidefinite Programming
Author(s): Alizadeh, F.; Haeberly, J. A.; Overton, M.
Abstract:
Semidefinite programming (SDP) is a convex optimization problem in the space of symmetric matrices. Primal-dual interior-point methods for SDP are discussed. These generate primal and dual matrices X and Z which commute only in the limit. A new method is proposed which iterates in the space of commuting matrices.
-
TR1994-674
1994
Automatic Synthesis Algorithms for Supervisory Controllers (Preliminary Report)
Antoniotti, M.;
Mishra, B.
Abstract
|
PDF
Title: Automatic Synthesis Algorithms for Supervisory Controllers (Preliminary Report)
Author(s): Antoniotti, M.; Mishra, B.
Abstract:
In this paper we describe our experience with a prototype system capable of synthesizing "Supervisor Controller Programs" based largely on the theory of discrete event systems (DES) first proposed by Ramadge and Wonham. We augment the theory by also allowing continuous time trajectories modeling transitions between events. We illustrate our approach with an example, the discrete control of a walking machine, which poses some challenges to the applicability of the theory, and finally we discuss some possible solutions.
Notes: Appeared in IEEE Proceedings of the Fourth International Conference on Computer Integrated Manufacturing and Automation Technology, Troy, NY, Oct. 1994
-
TR1994-675
1994
Discrete Event Models + Temporal Logic = Supervisory Controller: Automatic Synthesis of Locomotion Controllers
Antoniotti, M.;
Mishra, B.
Abstract
|
PDF
Title: Discrete Event Models + Temporal Logic = Supervisory Controller: Automatic Synthesis of Locomotion Controllers
Author(s): Antoniotti, M.; Mishra, B.
Abstract:
In this paper, we address the problem of the synthesis of controller programs for a variety of robotics and manufacturing tasks. The problem we choose for test and illustrative purposes is the standard ``Walking Machine Problem,'' a representative instance of a real "hybrid" problem with both logical/discrete and continuous properties and strong mutual influence without any reasonable separation. We aim to produce a ``compiler technology'' for this class of problems in a manner analogous to the development of the so-called ``Silicon Compilers'' for the VLSI technology. To cope with the difficulties inherent to the problem, we resort to a novel approach that combines many key ideas from a variety of disciplines: namely, ``Discrete Event Supervisory Systems'', Petri Nets approaches and ``Temporal Logic''.
Notes: Will appear in the 1995 IEEE International Conference on Robotics and Automation, Nagoya, Japan
-
TR1994-654
1994
Multilevel Schwarz Methods with Partial Refinement
Chen, H.
Abstract
|
PDF
Title: Multilevel Schwarz Methods with Partial Refinement
Author(s): Chen, H.
Abstract:
We consider multilevel additive Schwarz methods with partial refinement. These algorithms are generalizations of the multilevel additive Schwarz methods developed by Dryja and Widlund and many others. We give two different proofs, using quasi-interpolants under two different assumptions on the selected refinement subregions, to show that this class of methods has an optimal condition number. The first proof is based purely on the localization property of quasi-interpolants, while the second uses some results on iterative refinement methods. As a by-product, the multiplicative versions, which correspond to FAC algorithms with inexact solvers consisting of one Gauss-Seidel or damped Jacobi iteration, have optimal rates of convergence. Finally, some numerical results are presented for these methods.
-
TR1994-670
1994
Approximate Euclidean Shortest Path in 3-Space
Choi, J.;
Sellen, J.; Yap, C.K.
Abstract
|
PDF
Title: Approximate Euclidean Shortest Path in 3-Space
Author(s): Choi, J.; Sellen, J.; Yap, C.K.
Abstract:
Papadimitriou's approximation approach to the Euclidean shortest path (ESP) in 3-space is revisited. As this problem is NP-hard, his approach represents an important step towards practical algorithms. However, there are several gaps in the original description. Besides giving a complete treatment in the framework of bit complexity, we also improve on his subdivision method. Among the tools needed are root-separation bounds and non-trivial applications of Brent's complexity bounds on evaluation of elementary functions using floating point numbers.
-
TR1994-666
1994
Branching Continuous Time and the Semantics of Continuous Action
Davis, E.
Abstract
|
PDF
Title: Branching Continuous Time and the Semantics of Continuous Action
Author(s): Davis, E.
Abstract:
It is often useful to model the behavior of an autonomous intelligent creature in terms of continuous control and choice. For example, a robot that moves through space can be idealized as able to execute any continuous motion, subject to constraints on velocity and acceleration; in such a model, the robot can "choose" at any instant to change its acceleration. We show how such models can be described using a continuous branching time structure. We discuss mathematical foundations of continuous branching structures, theories of continuous action in physical worlds, embedding of discrete theories of action in a continuous structure, and physical and epistemic feasibility of plans with continuous action.
-
TR1994-657
1994
Adaptive Time-Frequency Approximations with Matching Pursuits
Davis, G.;
Mallat, S.; Zhang, Z.
Abstract
|
PDF
Title: Adaptive Time-Frequency Approximations with Matching Pursuits
Author(s): Davis, G.; Mallat, S.; Zhang, Z.
Abstract:
Computing the optimal expansion of a signal in a redundant dictionary of waveforms is an NP-complete problem. We introduce a greedy algorithm called a matching pursuit which computes a sub-optimal expansion. The dictionary waveforms which best match a signal's structures are chosen iteratively. An orthogonalized version of the matching pursuit is also developed. Matching pursuits are general procedures for computing adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. We derive a signal energy distribution in the time-frequency plane which does not contain interference terms, unlike the Wigner and Cohen class distributions. A matching pursuit is a chaotic map whose asymptotic properties are studied. We describe an algorithm which isolates the coherent structures of a signal and show an application to pattern extraction from noisy signals.
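The greedy selection step can be sketched in a few lines. This is a toy version of ours with a tiny dictionary in R^2, not the Gabor dictionary or orthogonalized variant of the paper; atoms are assumed unit-norm:

```python
def matching_pursuit(signal, dictionary, iters):
    # Greedy matching pursuit: at each step select the unit-norm atom with
    # the largest-magnitude inner product with the residual, subtract its
    # projection, and record the (atom index, coefficient) pair.
    residual = list(signal)
    chosen = []
    for _ in range(iters):
        best_i, best_c = max(
            ((i, sum(r * a for r, a in zip(residual, atom)))
             for i, atom in enumerate(dictionary)),
            key=lambda ic: abs(ic[1]),
        )
        residual = [r - best_c * a
                    for r, a in zip(residual, dictionary[best_i])]
        chosen.append((best_i, best_c))
    return chosen, residual

# A toy redundant dictionary of unit-norm atoms in R^2.
dictionary = [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8)]
chosen, residual = matching_pursuit((3.0, 1.0), dictionary, 2)
print(chosen, residual)
```

Because the dictionary is redundant, the expansion found this way is sub-optimal in general; the greedy choice only guarantees that the residual energy is non-increasing at every step.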
-
Ph.D. Thesis
1994
Systolic Combining Switch Designs
Dickey, Susan
Abstract
|
PDF
Title: Systolic Combining Switch Designs
Candidate: Dickey, Susan
Advisor(s): Gottlieb, Allan
Abstract:
High-performance VLSI switches are needed in the interconnection network of massively parallel shared memory multiprocessors. The switch designs we consider alleviate the ``hot spot'' problem by adding extra logic to the switches to combine conventional loads and stores as well as fetch-and-$\phi$ operations destined for the same memory location. The performance of three buffered switch architectures was investigated through probabilistic analysis and simulation: Type A switches, with k queues, one at each output, each accepting k inputs per cycle; and two one-input-queue designs, Type B switches, with $k^2$ output queues, and Type C switches, with k input queues. While the Type C switch is less expensive, Types A and B have considerably better performance. An efficient CMOS implementation for systolic queue designs was devised. A non-combining switch containing these systolic queues was fabricated through MOSIS in 3 micron CMOS and employed the NORA clocking methodology, using qualified clocks for distributing global control.
A combining switch was fabricated in 2 micron CMOS for use in the 16 by 16 processor/memory interconnection network of the NYU Ultracomputer prototype. Details are given about the internal logic of the two component types used in the network. A design usable in networks of size up to 256 by 256 has been prepared for fabrication by NCR at a smaller feature size in a higher pin-count package. Differences in the logic partitioning of the two designs are described. We describe the performance of these designs for systems of up to 1024 PEs, obtained through simulation. Our experience in implementing a combining switch indicates that the cost of hardware combining is much less than is widely believed. We compare the cost of a combining switch to that of a non-combining switch and discuss the scalability of the implemented design to large numbers of processors. Differences in the capabilities of combining switch architectures are studied. We describe the implementation of ``two-and-a-half-way'' combining, which promises to avoid network saturation in large networks at only slightly greater cost than two-way combining. We also discuss implementation alternatives and performance for a 4 by 4 combining switch.
- TR1994-662 1994 Multilevel Schwarz Methods for Elliptic Problems with Discontinuous Coefficients in Three Dimensions Dryja, M.; Sarkis, M.; Widlund, O. Abstract | PDF
-
TR1994-668
1994
A Direct-Drive Hand: Design, Modeling and Control
Ebner, M.;
Wallace, R.
Abstract
|
PDF
Title: A Direct-Drive Hand: Design, Modeling and Control
Author(s): Ebner, M.; Wallace, R.
Abstract:
An artificial direct-drive hand with 15 degrees of mobility, slightly larger than a human hand, is presented. The underlying technology is the recently developed miniature direct-drive actuator. The motivation for our design and the construction plan for the hand are given. The dynamics of the hand are analyzed theoretically and a model for control of the hand is presented. Finally, we describe our experiences experimenting with the hand. A direct-drive hand graphics interface has been developed to simulate the dynamics of the hand and to test control algorithms for it.
-
Ph.D. Thesis
1994
Gedanken: A Tool for Pondering the Tractability of Correct Program Technology
Ericson, Lars
Abstract
|
PDF
Title: Gedanken: A Tool for Pondering the Tractability of Correct Program Technology
Candidate: Ericson, Lars
Advisor(s): Mishra, Bud
Abstract:
We examine the feasibility of the Correct Program Technology (CPT) approach to program verification using available technology, with pessimistic results.
We compare CPT with RAPTS and the Calculus of Constructions. We specify the Correct Programmer's Workbench (CPW), and review six programming environments as platforms. We define a Correct Program Editor and prototype it in Mathematica.
CPT applies decision procedures for specification sublanguages to obtain shorter proofs, in the hope that shorter proofs will verify faster; these sublanguages, however, are NP-complete or worse. We review some heuristics for improving their average case. CPT relies on a sublanguage of set theory, MLS. We prove that MLS is NP-average-complete in the sense of the Levin-Gurevich theory of average-case complexity. We conjecture that shorter proofs of random theorems cost more to verify.
EMLS is an elementary relational language (ERL). We define syntactic simplification rule sets (SSRs) for ERLs. The average-case effect of an SSR is determined by the number of matches of the SSR with ERL sentences of $n$ individual variables. EMLS sentences over $n$ variables can be constructed from sentences in $L_{4,2,n}$ and $L_{2,3,n}$, where $L_{k,m,n}$ is the language of $k$ relations of $m$ arguments over $n$ variables. We recursively define a match-counting algorithm for $L_{k,m,n}$ SSRs and extend it to EMLS. If an SSR has $p$ patterns in $w$ pattern variables over $n$ individual variables, match counting costs $O(pn^w 2^{pn^w-1}(2+kn^m))$. Match counting for $L_{k,0,0}$ is in #P, and we conjecture that it is #P-complete. We conjecture that generating functions do not yield a method of approximating the number of matches, and that the problem of approximating matches is also #P-complete. We count the matches for low $n$ for some EMLS SSRs, with discouraging results, and note that the matches of an effective rule set must grow as the size of the language for $n$ variables.
We conclude that the remaining hope for verification is to build a large library of specification language constructs which occur frequently and can be verified in polynomial time.
-
TR1994-660
1994
Optimizing Eigenvalues of Symmetric Definite Pencils
Haeberly, J. A.;
Overton, M.
Abstract
|
PDF
Title: Optimizing Eigenvalues of Symmetric Definite Pencils
Author(s): Haeberly, J. A.; Overton, M.
Abstract:
We consider the following quasiconvex optimization problem: minimize the largest eigenvalue of a symmetric definite matrix pencil depending on parameters. A new form of optimality conditions is given, emphasizing a complementarity condition on primal and dual matrices. Newton's method is then applied to these conditions to give a new quadratically convergent interior-point method which works well in practice. The algorithm is closely related to primal-dual interior-point methods for semidefinite programming.
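The quasiconvex objective can be illustrated numerically. The sketch below (assuming NumPy; a simple grid scan, not the paper's quadratically convergent interior-point method) reduces the pencil to a standard symmetric eigenproblem via a Cholesky factorization of the positive definite matrix and scans a one-parameter affine family:

```python
import numpy as np

def lambda_max(A, B):
    """Largest eigenvalue of the symmetric definite pencil (A, B), B > 0."""
    # Reduce A x = lambda B x to a standard problem via Cholesky: B = L L^T.
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    M = Linv @ A @ Linv.T            # symmetric; same eigenvalues as the pencil
    return np.linalg.eigvalsh(M)[-1]  # eigvalsh returns eigenvalues ascending

# Pencil depending affinely on a scalar parameter t: A(t) = A0 + t * A1.
rng = np.random.default_rng(0)
A0 = rng.standard_normal((4, 4))
A0 = (A0 + A0.T) / 2
A1 = rng.standard_normal((4, 4))
A1 = (A1 + A1.T) / 2
B = np.eye(4) + 0.1 * np.ones((4, 4))   # symmetric positive definite

# lambda_max(A(t), B) is convex (hence quasiconvex) in t, so a coarse
# grid scan locates the neighborhood of the minimizer.
ts = np.linspace(-2.0, 2.0, 401)
vals = [lambda_max(A0 + t * A1, B) for t in ts]
t_star = ts[int(np.argmin(vals))]
```

The point of the sketch is only the shape of the objective; the paper's method instead solves the optimality conditions by Newton's method.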
-
Ph.D. Thesis
1994
Designing Pattern Matching Algorithms by Exploiting Structural Pattern Properties
Hariharan, Ramesh
Abstract
|
PDF
Title: Designing Pattern Matching Algorithms by Exploiting Structural Pattern Properties
Candidate: Hariharan, Ramesh
Advisor(s): Cole, Richard
Abstract:
Exact Complexity of String Matching: We consider the question of how many character comparisons are needed to find all occurrences of a pattern string of length $m$ in a text string of length $n$. We show an almost tight upper bound of the form $n + O(n/m)$ character comparisons, following preprocessing. Specifically, we show an upper bound of $n + \frac{8}{3(m+1)}(n-m)$ character comparisons. The following lower bounds are also shown: for on-line algorithms, a bound of $n + \frac{9}{4(m+1)}(n-m)$ character comparisons for $m = 35 + 36k$, for any integer $k \ge 1$; and for general algorithms, a bound of $n + \frac{2(n-m)}{m+3}$ character comparisons, for $m = 2k+1$, for any integer $k \ge 1$.
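The comparison-counting measure can be made concrete with a deliberately naive matcher (illustrative only; the thesis's algorithms achieve roughly $n + O(n/m)$ comparisons after preprocessing, far below the naive worst case of about $m(n-m+1)$):

```python
def naive_match(pattern, text):
    """Find all occurrences of pattern in text, counting every character
    comparison explicitly -- the cost measure studied in the thesis."""
    m, n = len(pattern), len(text)
    comparisons = 0
    occurrences = []
    for i in range(n - m + 1):
        j = 0
        while j < m:
            comparisons += 1              # one character comparison
            if text[i + j] != pattern[j]:
                break
            j += 1
        if j == m:
            occurrences.append(i)
    return occurrences, comparisons

occ, cmps = naive_match("aab", "aabaabaab")   # occ = [0, 3, 6], cmps = 15
```

Here the naive matcher already spends 15 comparisons on a 9-character text; the point of the thesis's bounds is that, after preprocessing the pattern, barely more than one comparison per text character is needed.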
Parallel Two-Dimensional Pattern Matching: We give the first time-, space-, and work-optimal common CRCW-PRAM algorithm for finding all occurrences of a two-dimensional pattern of size $m_1 \times m_2$ in a two-dimensional text of size $n_1 \times n_2$. Our algorithm runs in $O(1)$ time performing $O(n_1 n_2)$ work, following preprocessing of the pattern. A major portion of the preprocessing step is the computation of witnesses for the pattern. We show how to compute witnesses for the pattern in $O(\log\log m_2)$ time and $O(m_1 m_2)$ work when $m_2 \ge m_1$. In the process of designing the above algorithm, we also obtain some new periodicity properties of two-dimensional patterns.
Parallel Suffix Tree Construction: We consider the problem of constructing the suffix tree of a given string $s$ of length $m$ in parallel. An $O(m)$-work, $O(m)$-space, $O(\log^4 m)$-time CREW-PRAM algorithm for constructing the suffix tree of $s$ is obtained when $s$ is drawn from any fixed alphabet set. This is the first work- and space-optimal parallel algorithm known for this problem. It can be generalized to construct the suffix tree of a string $s$ drawn from any general alphabet set in $O(\log^4 m)$ time, $O(m\log |\Sigma|)$ work, and $O(m\log |\Sigma|)$ space, after the characters in $s$ have been sorted alphabetically; here $|\Sigma|$ is the number of distinct characters in $s$. In this case too, the algorithm is work optimal.
-
Ph.D. Thesis
1994
Compilation of Array-Style Programs for Distributed Memory MIMD Machines: a Geometric Approach
Katz, Alex
Abstract
|
PDF
Title: Compilation of Array-Style Programs for Distributed Memory MIMD Machines: a Geometric Approach
Candidate: Katz, Alex
Advisor(s): Schonberg, Edmond
Abstract:
Distributed memory MIMD (Multiple Instruction Multiple Data) machines are emerging as a cost-effective means of speeding up numerically intensive programs. They scale more easily than other parallel machines, but writing explicitly parallel programs for them is both difficult and error-prone. Compilers for languages like HPF make the task easier by generating the necessary inter-processor communication from the data distribution directives supplied by the programmer. This dissertation shows that, for a large class of array-style programs, automatic data distribution can produce a significant speedup on a distributed memory MIMD machine. Array-style programs use array primitives to manipulate entire arrays, rather than looping explicitly over array elements. APL programs are typically array-style.
We show how to apply automated data distribution to APL programs, which treat arrays and the operations on them as atomic. Automated data distribution determines the necessary inter-processor communication from the way APL primitives manipulate entire arrays, rather than by complex algebraic analysis of the patterns of array subscripts, as would be done in more conventional compilers. A simple distribution and alignment scheme automatically distributes arrays across the available processors. Arrays can be dynamic, with sizes varying during program execution. Data distribution is guided by array size estimates. Distribution trade-off analysis attempts to optimize the initial distribution by comparing the estimated communication and computation times, and replicating arrays whose partitioning results in excessive communication.
Building on the APL to C compiler developed by W.-M. Ching, we produce explicitly parallelized C from APL source programs. We describe the parallel implementation of most of the APL primitives. The implementation of several APL primitives uses the monotonic data movement algorithm. The ideas developed are demonstrated with eight APL programs of varying complexity. We show the speedup and efficiency obtained when running these programs on 2 to 32 processors. The speedup achieved on 32 processors, ranging from 7 to 30, shows the technique to be applicable to a wide range of programs.
-
TR1994-676
1994
An Optimal Preconditioner for a Class of Saddle Point Problems with a Penalty Term
Klawonn, A.
Abstract
|
PDF
Title: An Optimal Preconditioner for a Class of Saddle Point Problems with a Penalty Term
Author(s): Klawonn, A.
Abstract:
Iterative methods are considered for a class of saddle point problems with a penalty term arising from finite element discretizations of certain elliptic problems. An optimal preconditioner, independent of the discretization and the penalty parameter, is constructed. This approach is then used to design an iterative method with a convergence rate independent of the Lam\'{e} parameters occurring in the equations of linear elasticity.
Please see revised version tr683.
-
TR1994-677
1994
New Estimates for Ritz Vectors
Knyazev, A.
Abstract
|
PDF
Title: New Estimates for Ritz Vectors
Author(s): Knyazev, A.
Abstract:
The following estimate for the Rayleigh--Ritz method is proved: $$ |\tilde\lambda - \lambda|\,|(\tilde u, u)| \le \|A\tilde u - \tilde\lambda\tilde u\| \sin\angle\{u; \tilde U\}, \quad \|u\| = 1. $$ Here $A$ is a bounded self-adjoint operator in a real Hilbert/Euclidean space, $\{\lambda, u\}$ one of its eigenpairs, $\tilde U$ a trial subspace for the Rayleigh--Ritz method, and $\{\tilde\lambda, \tilde u\}$ a Ritz pair. This inequality makes it possible to analyze the fine structure of the error of the Rayleigh--Ritz method; in particular, it shows that $|(\tilde u, u)| \le C\epsilon^2$ if an eigenvector $u$ is close to the trial subspace with accuracy $\epsilon$ and a Ritz vector $\tilde u$ is an $\epsilon$-approximation to another eigenvector with a different eigenvalue. Generalizations of the estimate to the cases of eigenspaces and invariant subspaces are suggested, and estimates of approximation of eigenspaces and invariant subspaces are proved.
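As a numerical sanity check, the estimate can be verified directly with dense linear algebra. The sketch below (assuming NumPy; a random symmetric matrix and a random trial subspace) compares both sides of the inequality for every eigenpair and every Ritz pair:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                     # self-adjoint operator
lams, U = np.linalg.eigh(A)           # exact eigenpairs (lambda, u)

# Trial subspace: orthonormal basis Q of a random k-dimensional subspace.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Rayleigh-Ritz: eigenpairs of the projected operator Q^T A Q.
ritz_vals, S = np.linalg.eigh(Q.T @ A @ Q)
ritz_vecs = Q @ S                     # unit-norm Ritz vectors

P = Q @ Q.T                           # orthogonal projector onto the subspace
for lam, u in zip(lams, U.T):
    sin_angle = np.linalg.norm(u - P @ u)   # sin of angle(u, trial subspace)
    for tl, tu in zip(ritz_vals, ritz_vecs.T):
        lhs = abs(tl - lam) * abs(tu @ u)
        rhs = np.linalg.norm(A @ tu - tl * tu) * sin_angle
        assert lhs <= rhs + 1e-10     # the proved estimate
```

The bound is tight in simple cases (e.g., a one-dimensional trial subspace spanning two eigenvectors of a diagonal matrix attains equality).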
-
Ph.D. Thesis
1994
Lazy SETL Debugging with Persistent Data Structures
Liu, Zhiqing
Abstract
|
PDF
Title: Lazy SETL Debugging with Persistent Data Structures
Candidate: Liu, Zhiqing
Advisor(s): Schwartz, Jack
Abstract:
Debugging tools have traditionally been difficult to use, particularly for accumulating and exploring program runtime information. This dissertation addresses these issues by proposing a lazy debugging approach, which postpones the investigation of debugging hypotheses until a complete runtime history is available. This approach encourages a systematic way of debugging and supports many high-level debugging facilities. Recent advances in persistent data structures drastically reduce the time and memory overhead incurred in recording and storing execution events, and also make that overhead easily manageable.
To demonstrate this approach, a visual SETL debugger prototype has been designed and implemented based on D. Bacon's SETL translator. This debugger has a persistent runtime system designed using the node-splitting persistent data structures developed by Driscoll et al. It can efficiently record changes in program execution state under different recording granularities, while also supporting normal SETL execution. Users of this debugger are provided with a graphical interface supporting many powerful tools, such as forward/backward control/data breakpoints, interactive variable printing, program animation, and re-execution from any recorded execution moment.
A strong set of conclusions is drawn from an evaluation of the debugger's performance and usability, as well as the limitations and open questions of this debugging approach.
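The recording idea can be sketched with a "fat node" structure far simpler than the node-splitting structures of Driscoll et al. (illustrative only; the class and names below are hypothetical, not taken from the debugger): every write is stamped with a global version number, and any past state can be read back, which is exactly what backward breakpoints and re-execution need.

```python
import bisect

class PersistentCell:
    """A variable whose full write history is kept (a 'fat node').

    Each assignment records (version, value); reading at any past version
    finds the latest write at or before it, so a debugger can re-inspect
    state at any recorded execution moment."""
    def __init__(self, clock):
        self.clock = clock            # shared, monotonically increasing counter
        self.versions = []            # sorted version stamps
        self.values = []              # value written at each stamp

    def write(self, value):
        self.clock[0] += 1
        self.versions.append(self.clock[0])
        self.values.append(value)

    def read(self, version):
        # Latest write at or before the requested version.
        i = bisect.bisect_right(self.versions, version) - 1
        if i < 0:
            raise KeyError("no write at or before this version")
        return self.values[i]

clock = [0]
x = PersistentCell(clock)
x.write(10)        # recorded at version 1
x.write(20)        # recorded at version 2
```

Reads cost one binary search per access; the node-splitting structures of Driscoll et al. achieve better amortized bounds, which is what makes recording at fine granularity affordable.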
- TR1994-671 1994 Schwarz Preconditioners for Elliptic Problems with Discontinuous Coefficients Using Conforming and Non-Conforming Elements Martins, M.S. Abstract | PDF
-
Ph.D. Thesis
1994
Searching for Strings and Searching in Presence of Errors
Muthukrishnan, S.
Abstract
|
PDF
Title: Searching for Strings and Searching in Presence of Errors
Candidate: Muthukrishnan, S.
Advisor(s): Spencer, Joel
Abstract:
This dissertation deals with two classes of searching problems. The first class consists of pattern matching problems; the second comprises combinatorial searching problems in the presence of errors in the responses to queries. Our results are as follows.
Standard Stringology. Standard Stringology is the study of pattern matching problems in which a text location matches a pattern location provided the associated symbols are identical. The basic problem here is the string matching problem of detecting all occurrences of a pattern string in a text string. This naturally generalizes to the dictionary matching problem of finding all occurrences of a set of patterns, rather than a single pattern, in a given text. Very fast optimal parallel algorithms exist for string matching in the PRAM model. These algorithms rely on structural properties of the strings. Unfortunately, these structural properties are not useful for solving the dictionary matching problem. We have obtained the fastest and most work-efficient algorithms known for this problem and a number of its variants by introducing and using a new technique called shrink-and-spawn.
Non-Standard Stringology. In problems from Non-Standard Stringology, an arbitrary many-to-many matching relation holds between the text and pattern locations. An example is string matching with ``don't cares'' where the position in the text that has a ``don't care'' symbol matches every pattern position. The inherent complexity and structure of such non-standard string matching problems is not well understood. Our main results are inherent complexity bounds for these problems, characterized in terms of algebraic convolutions. Traditionally structure in pattern matching has meant repetitions in patterns, but this work exposes a novel graph-theoretic structure in these problems.
Searching in presence of errors. Given a set of items containing one or more distinguished items, the generic combinatorial search problem is to determine the distinguished item(s) using detection tests on groups of items. Motivated by fault-tolerance issues, we consider the scenario in which some tests get incorrect responses. We have developed a strategy that solves the generic problem above using at most one test more than necessary, even under adversarial placement of incorrect responses to the tests.
-
Ph.D. Thesis
1994
Visual Programming
Nickerson, Jeffrey
Abstract
|
PDF
Title: Visual Programming
Candidate: Nickerson, Jeffrey
Advisor(s): Schonberg, Edmond
Abstract:
While computer science textbooks and classroom lectures are filled with diagrams, and much of our design activity as programmers takes place on whiteboards, we write our programs as text. Proponents of visual programming suggest that we should take advantage of graphic user interface technology and draw rather than write our programs. This dissertation examines the extent to which this is possible, addressing the question of how graphic representation can best be used in the process of programming.
The use of diagrams in the field of computer science is thoroughly surveyed, and some underlying principles identified. The visual conventions of Adjoinment, Linking, and Enclosure are defined and illustrated. Three languages are developed: a simple programming language that encompasses shell commands, a visual version of APL, and a visual front end for Mathematica. The visual version of APL is notable in that it presents both a program and instances of data undergoing transformation as part of one unified diagram.
Building on the work of R. J. A. Buhr, new visual systems-design conventions are created to handle the intricacies of facilities in the Ada9X language. Asynchronous transfers of control, requeueing, and generic formal parameters are addressed. The asynchronous transfer of control convention is suitable for CASE representations of the language construct, and can be easily animated.
Some existing software metrics are modified for use in analyzing diagrams, and two new metrics are proposed: graphic token count and diagram class complexity. A graphic design measure, data density, is transformed into a computer science measure, token density. Using these metrics, graphic representations can be compared to each other and to textual representations. From this, a strong set of conclusions is drawn about the relative strengths of graphic and textual representation, as well as the limits and possibilities of graphic representation in programming.
- TR1994-661 1994 A Polylogarithmic Bound for an Iterative Substructuring Method for Spectral Elements in Three Dimensions Pavarino, L.; Widlund, O. Abstract | PDF
- TR1994-663 1994 Iterative Substructuring Methods for Spectral Elements: Problems in Three Dimensions Based on Numerical Quadrature Pavarino, L.; Widlund, O. Abstract | PDF
-
TR1994-672
1994
Planning Paths of Minimal Curvature
Sellen, J.
Abstract
|
PDF
Title: Planning Paths of Minimal Curvature
Author(s): Sellen, J.
Abstract:
We consider the problem of planning curvature-constrained paths amidst polygonal obstacles, connecting given start and target configurations. Let the critical curvature $R_c$ be the minimal curvature for which a constrained path exists. We describe an algorithm which approximates the critical curvature and finds a corresponding path. Further, we give an efficient decision procedure to determine whether there exists a path satisfying a given curvature constraint $R$, with running time polynomial in $|R - R_c|/R$.
-
TR1994-673
1994
Simple Multi Function Vision System for 3D Data Acquisition
Sokolov, S. M.;
Max, D. P.; Wallace, R. S.
Abstract
|
PDF
Title: Simple Multi Function Vision System for 3D Data Acquisition
Author(s): Sokolov, S. M.; Max, D. P.; Wallace, R. S.
Abstract:
We have developed a simple multi-function vision system for 3D data acquisition for a wide range of applications in robotics and automation. The system uses one CCD video camera and an active directed laser light source based on a direct-drive spherical pointing motor (SPM). The anatomy of the system and the algorithms used are described. System calibration methods and measurements of the accuracy of the outputs are presented. A list of applications is given.
-
TR1994-669
1994
Scaling Direct Drive Robots
Wallace, R.;
Selig, J.
Abstract
|
PDF
Title: Scaling Direct Drive Robots
Author(s): Wallace, R.; Selig, J.
Abstract:
Recent experimental and analytical evidence indicates that direct drive robots become very practical and economical at miniature and microscopic scales, so it is interesting to understand quantitatively the properties of direct drive robots under scaling transformations. This leads to a study of how screws and their dual co-screws behave under the group of similarity transforms. This group is the group of isometries together with dilations. Several different representations are found on the space of screws and complementary representations are found on the dual space of co-screws. From the electromagnetic theory of the force and torque on a magnet in a magnetic field, we derive the scaling properties of the electromagnetic wrench. Hence, these results can be directly applied to the scaling of direct drive motors [1]. We conclude by proposing a scale-invariant measure for direct drive actuator performance.
-
TR1994-655
1994
Pscheme: Extending Continuations to Express Control and Synchronization in a Parallel LISP
Yao, C.;
Goldberg, B.
Abstract
|
PDF
Title: Pscheme: Extending Continuations to Express Control and Synchronization in a Parallel LISP
Author(s): Yao, C.; Goldberg, B.
Abstract:
In this paper, we describe Pscheme, a parallel dialect of Scheme. The primary construct for specifying parallelism, synchronization, and communication is a natural extension of first-class continuations which we call a port. We describe the behavior of ports, along with the other parallel constructs of Pscheme. Because the user has precise control over the parallel computation, the Pscheme constructs can be used to build higher-level parallel programming abstractions, such as futures, semaphores, and Ada-style rendezvous. We provide the Pscheme code for these abstractions and discuss the current implementation of Pscheme on a shared-memory multiprocessor.
-
TR1994-667
1994
Representing Control in Parallel Applicative Programming
Yao, C.
Abstract
|
PDF
Title: Representing Control in Parallel Applicative Programming
Author(s): Yao, C.
Abstract:
This research is an attempt to reason about the control of parallel computation in the world of applicative programming languages.
Applicative languages, in which computation is performed through function application and functions are treated as first-class objects, have the benefits of elegance, expressiveness, and clean semantics. Parallel computation and real-world concurrent activities are much harder to reason about than their sequential counterparts. Many parallel applicative languages have thus hidden most control details behind their declarative programming styles, but they are not expressive enough to characterize many real-world concurrent activities that can be easily explained with concepts such as message passing, pipelining, and so on.
Ease of programming should not come at the expense of expressiveness. Therefore, we design a parallel applicative language, Pscheme, in which programmers can express explicitly the control of parallel computation while maintaining the clean semantics and ease of programming of applicative languages. In Pscheme, we propose the concept of ports to model general control in parallel computation. Through program examples, we show how Pscheme and ports support various parallel programming paradigms. We have also built libraries of higher-level control facilities with ports so that programming in Pscheme becomes easier.
We provide an operational semantics for Pscheme, and develop a compiler and a run-time system on NYU's Ultracomputer. Our experiments with parallel programs have shown satisfactory speedup. We claim that ports are the natural parallel extension of continuations in sequential computation, and thus conclude that representing general control in parallel applicative programming is feasible.
-
Ph.D. Thesis
1994
Representing Control in Parallel Applicative Programming
Yao, Chi
Abstract
|
PDF
Title: Representing Control in Parallel Applicative Programming
Candidate: Yao, Chi
Advisor(s): Goldberg, Benjamin
Abstract:
This research is an attempt to reason about the control of parallel computation in the world of applicative programming languages.
Applicative languages, in which computation is performed through function application and functions are treated as first-class objects, have the benefits of elegance, expressiveness, and clean semantics. Parallel computation and real-world concurrent activities are much harder to reason about than their sequential counterparts. Many parallel applicative languages have thus hidden most control details behind their declarative programming styles, but they are not expressive enough to characterize many real-world concurrent activities that can be easily explained with concepts such as message passing, pipelining, and so on. Ease of programming should not come at the expense of expressiveness. Therefore, we design a parallel applicative language, Pscheme, in which programmers can express explicitly the control of parallel computation while maintaining the clean semantics and ease of programming of applicative languages. In Pscheme, we propose the concept of ports to model general control in parallel computation. Through program examples, we show how Pscheme and ports support various parallel programming paradigms. We have also built libraries of higher-level control facilities with ports so that programming in Pscheme becomes easier.
We provide an operational semantics for Pscheme, and develop a compiler and a run time system on NYU's Ultracomputer. Our experiments with parallel programs have shown satisfactory speedup. We claim that ports are the natural parallel extensions of continuations in sequential computation, and thus conclude that representing general control in parallel applicative programming is feasible.
-
TR1993-630
1993
The Cell Programming Language
Agarwal, P.
Abstract
|
PDF
Title: The Cell Programming Language
Author(s): Agarwal, P.
Abstract:
We describe the Cell Programming Language (CPL), which we have designed to write programs that mimic the life of a biological cell. The aim is to study the complex interactions between cells which lead to diverse shapes of cellular aggregates.
Each cell is treated as a two-dimensional homogeneous polygon with a specific area. A cell goes through a series of states in its lifetime. In each state, one can specify the cell's growth rate; information about cell division and cell differentiation; the chemical constituents of the cell and their interactions; and cell motion. This behavior may be conditional based on the cell's own status (chemical concentrations and size) or on its neighborhood (the types of cells surrounding it, the contact lengths with each of them, their areas, the directions to these cells, and their chemical concentrations). The language is explored by modeling cellular sorting in vitro, and aggregation in Dictyostelium discoideum (a cellular slime mold).
-
Ph.D. Thesis
1993
Cell-based Computer Models in Developmental Biology
Agarwal, Pankaj
Abstract
|
PDF
Title: Cell-based Computer Models in Developmental Biology
Candidate: Agarwal, Pankaj
Advisor(s): Schwartz, Jacob T.
Abstract:
In developmental biology, modeling and simulation play an important role in understanding cellular behavior. We suggest a simple language, the Cell Programming Language (CPL), to write computer programs to describe this behavior. Using these programs, it is possible to simulate and visualize cell behavior.
A genome is the program for the development of an organism. The genome, in conjunction with the environment, determines the behavior of each cell of the organism. The program for each cell (written in CPL) plays the role of its genome. The program for an individual cell consists of a set of states. In each state, rules are specified which determine the cell properties (i.e. shape, motility, concentrations of various molecular species, etc.). Different states of the same cell signify different phases in the cell's life. Each cell has a tissue type associated with it. Cells of the same tissue type execute the same CPL program.
We use the discrete time simulation model. At every time step, each cell executes all the instructions in its present state sequentially. All cells are assumed to be executing in parallel, with synchronization performed after every time step.
The cells are two-dimensional. Each cell has a physical location comprising a collection of discrete connected points. This physical presence imparts to the cells the attributes of area, perimeter, and neighbors (other cells). The neighbor attribute forms the basis for all intercellular communication.
The language contains features for specifying:
- the location, area, and shape of the cells;
- the concentrations of various chemicals in each cell, the equations of their catalysis, and diffusion;
- the direction and speed of cell motion;
- the rates of cell growth and division;
- cell differentiation: the evolution of cell behavior during its lifetime.
We have employed CPL to model the following: aggregation in cellular slime mold in response to a chemotactic agent; the formation of skeletal elements in the vertebrate limb; and cellular segregation due to differential adhesion.
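The discrete-time model with synchronization after every step can be sketched outside CPL. The Python fragment below (a hypothetical three-cell chain with a single diffusing chemical, not CPL code) computes every new state from the current state before any cell is updated, mimicking parallel execution with a barrier after each step:

```python
def step(conc, neighbors, d=0.1):
    """One synchronized time step of chemical diffusion between cells.

    All new concentrations are computed from the current state before any
    cell is updated -- the synchronization discipline described above."""
    new = {}
    for cell, c in conc.items():
        # Flux proportional to the concentration difference with each neighbor.
        flux = sum(d * (conc[nb] - c) for nb in neighbors[cell])
        new[cell] = c + flux
    return new

# Three cells in a row; the chemical starts concentrated in cell 0.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
conc = {0: 1.0, 1: 0.0, 2: 0.0}
for _ in range(100):
    conc = step(conc, neighbors)
```

Because each step reads only the previous state, total chemical is conserved and the concentrations relax toward the uniform equilibrium, as a diffusion rule should.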
-
TR1993-635
1993
A Language for Semantic Analysis
Cai, J.
Abstract
|
PDF
Title: A Language for Semantic Analysis
Author(s): Cai, J.
Abstract:
Semantic analysis is important for compilers. In the APTS program transformation system, semantics is specified by rules in the language RSL. The semantic rules are interpreted by APTS to generate the semantic information of the program, which is then used by the rewriting engine for program translation. This approach has proved to be convenient and powerful in our construction of a SETL-to-C compiler. In this paper, we discuss the features, applications, implementation strategy, and performance of RSL.
-
Ph.D. Thesis
1993
Applications of Convexity in Computational Geometry
Capoyleas, Vasilis
Abstract
|
PDF
Title: Applications of Convexity in Computational Geometry
Candidate: Capoyleas, Vasilis
Advisor(s): Pach, Janos; Pollack, Richard
Abstract:
We present seven results in computational geometry. The concept of convexity plays a vital role in all seven of the results; either as a tool in the proof method or as a means of giving a formal definition.
The topics considered are:
Weak $\epsilon$-nets: We provide strong upper bounds for the size of the smallest weak $\epsilon$-net of a set of points, in two basic cases.
Geometric Clusterings: We provide the first polynomial algorithm to find an optimal clustering of a set of points in the plane. The optimality criteria are based on the diameter and radius of the clusters.
The Hadwiger-Kneser-Thue Poulsen conjecture: This famous 40-year-old conjecture states that the area of the union of a set of disks is diminished if the disks are pushed together. We provide two partial results toward this conjecture.
Grasping: We consider grasping of polygonal objects by a pair of parallel jaws. We define a model and prove that a fairly large class of polygons can be grasped under this model.
Graph drawing and crossing numbers: We consider the problem of estimating the maximum number of edges for graphs that satisfy some sort of a relaxed planarity condition. We provide exact bounds for an important special case.
-
Ph.D. Thesis
1993
New Techniques for the Analysis and Implementation of Functional Programs
Chuang, Tyng-Ruey
Abstract
|
PDF
Title: New Techniques for the Analysis and Implementation of Functional Programs
Candidate: Chuang, Tyng-Ruey
Advisor(s): Goldberg, Benjamin
Abstract:
Functional programming languages provide programmers with clean semantics and side-effect free computation, which make easier the tasks of designing programs and reasoning about them. Efficient implementations of purely functional programs, however, can pose certain challenges. Our purpose in this dissertation is to develop new techniques for the efficient analysis and implementation of functional programs.
Our first goal is to investigate a syntactic approach, in contrast to the usual semantic approaches, to finding the least fixed points of higher-order functions over finite domains. The second objective is to develop implementation techniques for aggregate data structures in functional programs such that accesses to aggregates are both efficient and side-effect free.
Finding the least fixed point of a monotonic function over a finite domain is an essential task when analyzing a functional program in the framework of abstract interpretation. Previous methods for least fixed point finding have primarily used semantic approaches, which often traverse large portions of the semantic domain and may be very inefficient even for simple programs. We propose a syntactic method based on an augmented simply typed lambda calculus. It is shown that, for finite domains, the syntactic method is both sound and complete with respect to the semantics. Moreover, we demonstrate that the proposed syntactic method can be quite effective in cases where the usual semantic method is very inefficient.
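The semantic method's core loop is Kleene iteration from bottom. A minimal sketch (the reachability map below is a hypothetical example, not from the dissertation) shows the iteration that may need as many steps as the domain is tall, which is the inefficiency the syntactic method aims to avoid:

```python
def least_fixed_point(f, bottom, leq):
    """Kleene iteration: bottom, f(bottom), f(f(bottom)), ... until stable.

    For a monotonic, inflationary f over a finite domain this terminates at
    the least fixed point above bottom, but the chain can be as long as the
    domain is tall."""
    x = bottom
    while True:
        y = f(x)
        assert leq(x, y), "f must be inflationary along the chain"
        if y == x:
            return x
        x = y

# Hypothetical example: the set of reachable vertices is the least fixed
# point of "add all successors" over the finite lattice of vertex subsets.
edges = {0: [1], 1: [2], 2: [2], 3: []}
f = lambda s: s | frozenset(w for v in s for w in edges[v])
reach = least_fixed_point(f, frozenset({0}), frozenset.issubset)
```

For higher-order functions over finite domains the same iteration applies, but the lattice of functions can be enormous even when each argument domain is small, which motivates reasoning syntactically instead.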
Efficient implementations of aggregate data structures for functional programs have been an active research topic. The problem arises because once an aggregate is updated, both the old version and the newly updated copy must be preserved to maintain the side-effect free semantics of functional languages. We modify the shallow binding scheme of Baker to implement functional arrays for efficient incremental updates and voluminous reads. The scheme, however, uses side-effects and cannot be implemented in purely functional languages themselves. We then investigate the possibility of implementing efficient aggregates without using side-effects, and show that real-time deques can be implemented in a purely functional way. We describe several interesting applications of this technique.
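The side-effect free property at stake can be illustrated with a deque built from immutable tuples: every update returns a new version while old versions remain valid. This is the simple amortized two-list construction, not the real-time deques of the thesis, which need a more careful incremental rebalancing:

```python
# A purely functional deque as a pair of front/back tuples. "Updates"
# return new versions; old versions stay usable (persistence).

def empty():
    return ((), ())

def push_front(d, x):
    front, back = d
    return ((x,) + front, back)

def push_back(d, x):
    front, back = d
    return (front, (x,) + back)

def pop_front(d):
    front, back = d
    if not front:                                  # rebalance: reverse back list
        front, back = tuple(reversed(back)), ()
    return front[0], (front[1:], back)

d1 = push_back(push_back(empty(), 1), 2)
x, d2 = pop_front(d1)
print(x)                 # 1
y, _ = pop_front(d1)     # the old version d1 is still valid
print(y)                 # 1
```

The reversal step is what makes this version only amortized O(1); the real-time variants spread that reversal over many operations.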
-
TR1993-637
1993
Knowledge Preconditions for Plans
Davis, Ernest
Abstract
|
PDF
Title: Knowledge Preconditions for Plans
Author(s): Davis, Ernest
Abstract:
For an agent to be able to rely on a plan, he must know both that he is physically capable of carrying out the physical actions involved, and that he knows enough to carry out the plan. In this paper, we advance and discuss new definitions of "knowing enough to carry out a plan", for the case of a single agent carrying out a sequence of primitive actions one at a time. We consider both determinate and indeterminate plans.
We show how these definitions can be expressed in a formal logic, using a situation calculus model of time and a possible worlds model of knowledge. The definitions strictly subsume previous theories for the single-agent case without concurrent actions.
We illustrate the power of the definition by showing that it supports results of the following kinds:
- Positive verification: Showing that a plan is feasible.
- Negative verification: Showing that a plan is infeasible.
- Monotonicity: The more an agent knows, the more plans are executable.
- Reduction for omniscient agent: For an omniscient agent, a plan is epistemically feasible if and only if it is physically feasible.
- Simple recursive rules that are sufficient conditions for the feasibility of a plan described as a sequence or a conditional combination of subplans.
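The sequence case of such definitions can be sketched in a possible-worlds style: a plan of primitive actions is epistemically feasible only if, at each step, the action is executable in every world the agent considers possible. The worlds, executability relation, and update rule below are invented for illustration, and indeterminate plans are ignored:

```python
# Toy possible-worlds check of "knowing enough" to execute a plan.
# worlds: the set of states the agent cannot distinguish.

def epistemically_feasible(worlds, plan, executable, result):
    for action in plan:
        if not all(executable(action, w) for w in worlds):
            return False          # the agent does not KNOW the action is executable
        worlds = {result(action, w) for w in worlds}
    return True

# World = position on a half-line; the agent knows it is at 0 or 1, not which.
executable = lambda a, w: not (a == "left" and w == 0)   # cannot step left of 0
result = lambda a, w: w + (1 if a == "right" else -1)

print(epistemically_feasible({0, 1}, ["right", "left"], executable, result))  # True
print(epistemically_feasible({0, 1}, ["left"], executable, result))           # False
```

The second plan is physically feasible in one of the two worlds, but the agent cannot rely on it: it does not know enough, which is exactly the gap the paper's definitions formalize.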
-
TR1993-638
1993
Schwarz Analysis of Iterative Substructuring Algorithms for Elliptic Problems in Three Dimensions
Dryja, M.;
Smith, B.; Widlund, O.
Abstract
|
PDF
Title: Schwarz Analysis of Iterative Substructuring Algorithms for Elliptic Problems in Three Dimensions
Author(s): Dryja, M.; Smith, B.; Widlund, O.
Abstract:
Domain decomposition methods provide powerful preconditioners for the iterative solution of the large systems of algebraic equations that arise in finite element or finite difference approximations of partial differential equations. The preconditioners are constructed from exact or approximate solvers for the same partial differential equation restricted to a set of subregions into which the given region has been divided. In addition, the preconditioner is often augmented by a coarse, second-level approximation that provides additional, global exchange of information and can enhance the rate of convergence considerably. The iterative substructuring methods, based on decompositions of the region into non-overlapping subregions, form one of the main families of such methods.
Many domain decomposition algorithms can conveniently be described and analyzed as Schwarz methods. These algorithms are fully defined in terms of a set of subspaces and auxiliary bilinear forms. A general theoretical framework has previously been developed and, in this paper, these techniques are used in an analysis of iterative substructuring methods for elliptic problems in three dimensions. A special emphasis is placed on the difficult problem of designing good coarse models and obtaining robust methods for which the rate of convergence is insensitive to large variations in the coefficients of the differential equation. Domain decomposition algorithms can conveniently be built from modules, which represent local and global components of the preconditioner. In this paper, a number of such possibilities are explored and it is demonstrated how a great variety of fast algorithms can be designed and analyzed.
-
TR1993-626
1993
Schwarz Methods of Neumann-Neumann Type for Three-Dimensional Elliptic Finite Element Problems
Dryja, M.;
Widlund, O.
Abstract
|
PDF
Title: Schwarz Methods of Neumann-Neumann Type for Three-Dimensional Elliptic Finite Element Problems
Author(s): Dryja, M.; Widlund, O.
Abstract:
Several domain decomposition methods of Neumann-Neumann type are considered for solving the large linear systems of algebraic equations that arise from discretizations of elliptic problems by finite elements. We will only consider problems in three dimensions. Several new variants of the basic algorithm are introduced in a Schwarz method framework that provides tools which have already proven very useful in the design and analysis of other domain decomposition and multi-level methods.
The Neumann-Neumann algorithms have several advantages over other domain decomposition methods. The subregions, which define the subproblems, only share the boundary degrees of freedom with their neighbors. The subregions can also be of quite arbitrary shape and many of the major components of the preconditioner can be constructed from subprograms available in standard finite element program libraries. However, in its original form, the algorithm lacks a mechanism for global transportation of information and its performance therefore suffers when the number of subregions increases. In the new variants of the algorithms, considered in this paper, the preconditioners include global components, of low rank, to overcome this difficulty. Bounds are established for the condition number of the iteration operator, which are independent of the number of subregions, and depend only polylogarithmically on the number of degrees of freedom of individual local subproblems. Results are also given for problems with arbitrarily large jumps in the coefficients across the interfaces separating the subregions.
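The Schwarz viewpoint, a preconditioner assembled as a sum of subspace solves, can be illustrated with a toy one-level additive overlapping Schwarz preconditioner for the 1D Poisson matrix inside conjugate gradients. The mesh size, subdomain layout, and overlap are our own choices; the paper's Neumann-Neumann methods use non-overlapping subregions and the coarse, low-rank global component that this sketch omits:

```python
import numpy as np

# 1D Poisson matrix (the model elliptic problem).
n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Overlapping index blocks: the "subregions".
blocks = [np.arange(max(0, s - 2), min(n, s + 18)) for s in range(0, n, 16)]

def M_inv(r):
    """Additive Schwarz: sum of local exact solves R_i^T A_i^{-1} R_i r."""
    z = np.zeros_like(r)
    for idx in blocks:
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z

def pcg(b, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients with M_inv as preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    for it in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, it + 1
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxit

b = np.ones(n)
x, iters = pcg(b)
print(iters, np.linalg.norm(A @ x - b))
```

Without a coarse component the iteration count of such one-level methods grows with the number of subregions, which is exactly the deficiency the paper's global components address.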
-
Ph.D. Thesis
1993
Nonholonomic Motion Planning : Algorithms and Software
Fernandes, Christopher
Abstract
|
PDF
Title: Nonholonomic Motion Planning : Algorithms and Software
Candidate: Fernandes, Christopher
Advisor(s): Mishra, Bud
Abstract:
Robot motion planning with nonholonomic constraints has recently engaged the attention of roboticists, as its application in dexterous manipulation, mobile robots and space robotics has begun to be understood. Such constraints arise from two different sources - Rolling Constraints and Non-Integrable Conservation Laws. For instance, the kinematics of dexterous manipulation using hard fingers making contact on a hard object requires nonholonomic motion planning (NMP) in order to satisfy rolling constraint. On the other hand, the control of attitude of space platform-based manipulators using only the internal motion of their manipulator joints requires NMP, as a result of the law of conservation of angular momentum.
Recently some algorithms and their implementation in software have been created in order to understand, simulate and control nonholonomic systems. Currently most of the algorithms have been demonstrated in somewhat specialized applications. There is a great need for software that enables the researcher to quickly test algorithms on these simple systems and then experiment with potential generalizations.
In this thesis, we describe a software system that we have developed at NYU and the underlying principles and algorithms (the ``Basis algorithm''). The system runs on the SGI Iris and is written in C, with auxiliary tools from Unix, Mathematica, DASSL, etc. We shall also describe how we have designed controllers for such example nonholonomic systems as the unicycle, the space station and the space platform-mounted robot manipulator.
It is hoped that this thesis will be useful for the control engineers engaged in designing non-linear control systems, for roboticists studying dexterous manipulations, motion planning and space robotics and finally, for software engineers interested in building tools and applications for robotics.
-
TR1993-628
1993
The Complexity of Resolvent Resolved
Gallo, G.;
Mishra, B.
Abstract
|
PDF
Title: The Complexity of Resolvent Resolved
Author(s): Gallo, G.; Mishra, B.
Abstract:
The concept of a resolvent of a prime ideal was originally introduced by J.F. Ritt along with the notion of a characteristic set. The motivation for studying resolvents comes from their connections with the birational isomorphisms that describe structures of irreducible algebraic varieties by means of an equivalent hypersurface and a one-to-one rational map. As a result, these ideas have a wide range of applications in such areas as solid modeling, computer aided design and manufacturing. An algorithm to compute the resolvent by means of characteristic sets was first proposed by Ritt. This and some related algorithms have resurfaced as interest in resolvent structures has grown, spurred by their applicability.
Unfortunately, the algebraic complexity of the resolvent and the computational complexity of the associated algorithms have never been satisfactorily explored. In this paper, we give single exponential upper and lower bounds for the degrees of the resolvent and its associated parametrizing polynomials. We also show that the resolvent can be computed deterministically in single exponential sequential and polynomial parallel time complexity. All previous algorithms for resolvent had relied on a random choice of certain extraneous parameters.
-
Ph.D. Thesis
1993
Dynamic Impact Analysis: Analyzing Error Propagation in Program Executions
Goradia, Tarak
Abstract
|
PDF
Title: Dynamic Impact Analysis: Analyzing Error Propagation in Program Executions
Candidate: Goradia, Tarak
Advisor(s): Weyuker, Elaine
Abstract:
Test adequacy criteria serve as rules to determine whether a test set adequately tests a program. The effectiveness of a test adequacy criterion is determined by its ability to detect faults. For a test case to detect a specific fault, it should execute the fault, cause the fault to generate an erroneous state and propagate the error to the output. Analysis of previously proposed code-based testing strategies suggests that satisfying the error propagation condition is both important and expensive. The technique of dynamic impact analysis is proposed for analyzing a program execution and estimating the error propagation behavior of various potential sources of errors in the execution. Impact graphs are introduced to provide an infrastructure supporting the analysis. A program impact graph modifies the notion of a program dependence graph proposed in the literature in order to capture some of the subtle impact relationships that exist in a program. An execution impact graph represents the dynamic impact relationships that are demonstrated during a program execution. The notion of impact strength is defined as a quantitative measure of the error sensitivity of an impact. A cost-effective algorithm for analyzing impact relationships in an execution and computing the impact strengths is presented. A research prototype implemented to demonstrate the feasibility of dynamic impact analysis is briefly described. The time complexity of dynamic impact analysis is shown to be linear with respect to the original execution time, and experimental measurements indicate that the constant of proportionality is a small number. The experiments undertaken to validate the computation of impact strengths are presented. An experience study relating impact strengths to error propagation in faulty programs is also presented. The empirical results provide evidence indicating a strong positive correlation between impact strength and error propagation.
The results also emphasize the need for better heuristics to improve the accuracy of the error propagation estimates. Potential applications of dynamic impact analysis to mutation testing, syntactic coverage-based testing and dynamic program slicing are discussed.
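The quantity being estimated, how often an error at one program point survives to the output, can be illustrated by brute force: inject an error at an intermediate value and count the runs in which the output changes. The toy program and Monte-Carlo scheme are ours; the dissertation computes impact strengths by analyzing the execution impact graph, not by repeated fault injection:

```python
import random

def program(a, b, corrupt_t=False):
    t = a * b              # potential error source
    if corrupt_t:
        t += 1             # inject an erroneous state at t
    if t > 10:             # the error may be masked by control flow...
        return 1
    return t % 3           # ...or by information-losing operators

def impact_strength(trials=2000, seed=0):
    """Fraction of random executions in which the injected error propagates."""
    rng = random.Random(seed)
    propagated = 0
    for _ in range(trials):
        a, b = rng.randint(0, 5), rng.randint(0, 5)
        if program(a, b) != program(a, b, corrupt_t=True):
            propagated += 1
    return propagated / trials

s = impact_strength()
print(round(s, 2))   # strictly between 0 and 1: sometimes masked, sometimes not
```

That the estimate sits strictly between 0 and 1 is the point: error propagation is neither guaranteed nor negligible, which is why a quantitative impact strength is useful.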
-
TR1993-645
1993
Norms of Functions of Matrices
Greenbaum, A.
Abstract
|
PDF
-
TR1993-624
1993
A Hybrid Algorithm for Optimizing Eigenvalues of Symmetric Definite Pencils
Haeberly, J.;
Overton, M.
Abstract
|
PDF
Title: A Hybrid Algorithm for Optimizing Eigenvalues of Symmetric Definite Pencils
Author(s): Haeberly, J.; Overton, M.
Abstract:
We present an algorithm for the optimization of the maximum eigenvalue of a symmetric definite pencil depending affinely on a vector of parameters. The algorithm uses a hybrid approach, combining a scheme based on the method of centers, developed by Boyd and El Ghaoui, with a new quadratically convergent local scheme. A convenient expression for the generalized gradient of the maximum eigenvalue of the pencil is also given, expressed in terms of a dual matrix. The algorithm computes the dual matrix which establishes the optimality of the computed solution.
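The gradient information such a local scheme relies on can be illustrated in the simple-eigenvalue case: with the eigenvector v of the pencil normalized so that v^T B v = 1, the derivative of the maximum eigenvalue along a parameter is v^T (dA/dx) v. A numpy sketch with made-up matrices, checked against a finite difference:

```python
import numpy as np

A0 = np.diag([3.0, 1.0, 0.0])
A1 = np.array([[0.0, 1, 0], [1, 0, 0], [0, 0, 1]])   # dA/dx
B = np.diag([1.0, 2.0, 1.0])                          # symmetric positive definite

def lam_max(x):
    """Max eigenvalue of A(x) v = lambda B v via a Cholesky reduction."""
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    C = Linv @ (A0 + x * A1) @ Linv.T     # standard symmetric problem
    w, U = np.linalg.eigh(C)
    v = np.linalg.solve(L.T, U[:, -1])    # back-transform: v^T B v = 1
    return w[-1], v

x = 0.3
lam, v = lam_max(x)
grad = v @ A1 @ v                          # d lambda_max / dx for a simple eigenvalue
fd = (lam_max(x + 1e-6)[0] - lam_max(x - 1e-6)[0]) / 2e-6
print(abs(grad - fd))
```

At a multiple eigenvalue this derivative no longer exists and the generalized gradient is described by a dual matrix, the object the paper uses to certify optimality.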
-
TR1993-641
1993
Characterization of Self-Similar Multifractals with Wavelet Maxima
Hwang, W.;
Mallat, S.
Abstract
|
PDF
Title: Characterization of Self-Similar Multifractals with Wavelet Maxima
Author(s): Hwang, W.; Mallat, S.
Abstract:
Self-similar multifractals have a wavelet transform whose maxima define self-similar curves in the scale-space plane. We introduce an algorithm that recovers the affine self-similarity parameters through a voting procedure in the corresponding parameter space. The voting approach is robust with respect to renormalization noise and can recover the value of parameters having random fluctuations. We describe numerical applications to Cantor measures, dyadic multifractals and to the study of diffusion limited aggregates.
-
Ph.D. Thesis
1993
Singularity Detection, Noise Reduction and Multifractal Characterization
Hwang, Wen-Liang
Abstract
|
PDF
Title: Singularity Detection, Noise Reduction and Multifractal Characterization
Candidate: Hwang, Wen-Liang
Advisor(s): Mallat, Stephane
Abstract:
Most of a signal's information is often carried by its singularities. We study the characterization of singularities with the wavelet transform and its modulus maxima. We introduce numerical algorithms to detect and characterize pointwise singularities from the behavior of the wavelet transform maxima across scales. As an application, we develop a denoising algorithm which discriminates the signal information from noise through an analysis of local singularities. In one dimension, we recover a piecewise smooth signal in which the sharp transitions are preserved. In two dimensions, the wavelet maxima algorithm detects and characterizes the edges. The geometrical properties of edges are used to discriminate the noise from the image information, and the denoising algorithm restores sharp images even at low SNR.
Multifractals are singular signals having some self-similarity properties. We develop a robust algorithm to extract the fractal parameter of fractional Brownian motion embedded in white noise. Fractal parameters are estimated from the evolution of the variance of the wavelet coefficients across scales with a modified penalty method. Self-similar multifractals have a wavelet transform whose maxima define self-similar curves in the scale-space plane. We introduce an algorithm to recover the affine self-similarity parameters with a voting procedure. This voting strategy is robust with respect to renormalization noise. We describe numerical applications to Cantor measures, dyadic multifractals and to the study of diffusion limited aggregates.
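The variance-across-scales idea can be sketched with a Haar wavelet cascade: for fractional Brownian motion with exponent H, the detail-coefficient variance scales like 2^{j(2H+1)} with the level j, so a log2-variance regression recovers H. We test on ordinary Brownian motion (H = 1/2); the modified penalty method of the thesis is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(2**16))   # Brownian motion sample path

def haar_detail_vars(x, levels):
    """Variance of orthonormal Haar detail coefficients at each level."""
    out = []
    a = x.astype(float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation, for the next level
        out.append(d.var())
    return np.array(out)

v = haar_detail_vars(x, 8)
j = np.arange(1, 9)
# Fit on the coarser levels, where the discrete cascade matches the
# continuous-scale law 2^{j(2H+1)} well.
slope = np.polyfit(j[2:], np.log2(v[2:]), 1)[0]
H = (slope - 1) / 2
print(round(H, 2))   # should land near 0.5 for Brownian motion
```

The finest levels are deliberately dropped from the fit: discrete sampling distorts the scaling law there, the same kind of bias that motivates the thesis's more careful estimator.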
-
TR1993-639
1993
Competitive Algorithms and Lower Bounds for On-Line Scheduling of Multiprocessor Real-Time Systems
Koren, G.;
Shasha, D.
Abstract
|
PDF
Title: Competitive Algorithms and Lower Bounds for On-Line Scheduling of Multiprocessor Real-Time Systems
Author(s): Koren, G.; Shasha, D.
Abstract:
We study competitive on-line scheduling in multi-processor real-time environments. In our model, every task has a deadline and a value that it obtains only if it completes by its deadline. A task can be assigned to any processor, all of which are equally powerful. The problem is to design an on-line scheduling algorithm (i.e., the scheduler has no knowledge of a task until it is released) with worst case guarantees as to the total value obtained by the system.
We study systems with two or more processors and with uniform or non-uniform value density. We present an inherent limit on the best competitive guarantee that any on-line parallel real-time scheduler can give. Then we present a competitive algorithm that achieves a worst case guarantee which is only within a factor of 2 from the best possible guarantee in many cases. These are the most general results yet known for parallel overloaded real-time scheduling.
-
Ph.D. Thesis
1993
Competitive On-line Scheduling for Overloaded Real-Time Systems
Koren, Gilad
Abstract
|
PDF
Title: Competitive On-line Scheduling for Overloaded Real-Time Systems
Candidate: Koren, Gilad
Advisor(s): Shasha, Dennis; Mishra, Bud
Abstract:
We study competitive on-line scheduling in uniprocessor and multiprocessor real-time environments. In our model, tasks are sporadic and preemptable. Every task has a deadline and a value that the system obtains only if the task completes its execution by its deadline. The aim of a scheduler is to maximize the total value obtained from all the tasks that complete before their deadline.
An on-line scheduler has no knowledge of a task until it is released. The problem is to design an on-line scheduler with worst case guarantees even in the presence of overloaded periods. The guarantee is given in terms of a positive competitive factor. We say that an on-line algorithm has a competitive factor of r, 0 < r <= 1, when under all possible circumstances (i.e., task sets) the scheduler will get at least r times the best possible value. The best value is the value obtained by a clairvoyant algorithm. In contrast to an on-line scheduler, the clairvoyant algorithm knows the entire task set a priori at time zero.
When a uniprocessor system is underloaded there exist several optimal on-line algorithms that will schedule all tasks to completion (e.g., the Earliest Deadline First algorithm). However, under overload, these algorithms perform poorly. Heuristics have been proposed to deal with overloaded situations but these give no worst case guarantees.
We present an optimal on-line scheduling algorithm for uniprocessor overloaded systems called D-over. D-over is optimal in the sense that it has the best competitive factor possible. Moreover, while the system is underloaded, D-over will obtain 100% of the possible value.
In the multiprocessor case, we study systems with two or more processors. We present an inherent limit (lower bound) on the best competitive guarantee that any on-line parallel real-time scheduler can give. Then we present a competitive algorithm that achieves a worst case guarantee which is within a small factor from the best possible guarantee in many cases.
These are the most general results yet known for competitive scheduling of multiprocessor real-time systems.
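The failure mode that motivates this work is easy to reproduce: under overload, plain Earliest Deadline First can abandon a long task for a tighter one it then also misses, obtaining zero value. A discrete-time simulation with invented task data and uniform value density (value = length); D-over itself is not implemented here:

```python
def edf_value(tasks):
    """Run EDF on (release, deadline, length) triples on one processor.
    A task's value (= its length here) counts only if it completes by its deadline."""
    remaining = {i: length for i, (_, _, length) in enumerate(tasks)}
    value, t = 0, 0
    while any(remaining.values()):
        ready = [i for i, (r, d, _) in enumerate(tasks)
                 if r <= t and remaining[i] > 0 and d > t]
        if not ready:
            future = [r for i, (r, _, _) in enumerate(tasks)
                      if remaining[i] > 0 and r > t]
            if not future:
                break                  # all leftover work has missed its deadline
            t = min(future)            # idle until the next release
            continue
        i = min(ready, key=lambda j: tasks[j][1])   # earliest deadline first
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0 and t <= tasks[i][1]:
            value += tasks[i][2]
    return value

# A long task, then a tight task arriving mid-way: an overload.
print(edf_value([(0, 10, 10), (5, 9, 5)]))   # 0: EDF finishes neither task
print(edf_value([(0, 10, 10)]))              # 10: underloaded EDF is optimal
```

A clairvoyant scheduler would simply run the long task to completion for value 10, so naive EDF's competitive factor on this instance is 0, which is why overload-resistant algorithms with guaranteed competitive factors are needed.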
-
TR1993-619
1993
Matching Pursuits with Time-Frequency Dictionaries , Rev
Mallat, S.;
Zhang, Z.
Abstract
|
PDF
Title: Matching Pursuits with Time-Frequency Dictionaries , Rev
Author(s): Mallat, S.; Zhang, Z.
Abstract:
We introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. We derive a signal energy distribution in the time-frequency plane which, unlike Wigner and Cohen class distributions, does not include interference terms. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. We compare a matching pursuit decomposition with a signal expansion over an optimized wavelet packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser.
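The greedy selection step can be sketched directly: at each iteration, pick the dictionary atom most correlated with the residual and subtract its projection. We use a random Gaussian dictionary rather than the paper's Gabor dictionary, and the sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, natoms = 64, 256
D = rng.standard_normal((n, natoms))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms

def matching_pursuit(x, D, steps):
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(steps):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))     # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]   # subtract the projection; ||r|| decreases
    return coeffs, residual

# A signal built from three atoms is recovered almost exactly in a few steps.
x = 2.0 * D[:, 5] - 1.5 * D[:, 100] + 0.5 * D[:, 200]
coeffs, r = matching_pursuit(x, D, 30)
print(np.linalg.norm(r) / np.linalg.norm(x))   # residual shrinks rapidly
```

Each step strictly decreases the residual energy, which is the conservation property behind the interference-free energy distribution described above.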
-
TR1993-652
1993
Feedback Control of Miniature Direct Drive Devices
Max, D.;
Wallace, R.
Abstract
|
PDF
Title: Feedback Control of Miniature Direct Drive Devices
Author(s): Max, D.; Wallace, R.
Abstract:
We discuss dynamics and control of miniature direct drive actuators, specifically for the two axis spherical pointing motor and for the one axis direct drive finger joint. These actuators can move both accurately and at high speed; however, the capabilities of these devices cannot be fully exploited using open-loop control techniques. We derive an ideal PID feedback control scheme. Our initial experiments indicate that PID feedback control for the SPM and finger joint is highly feasible and a significant improvement over open-loop methods.
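The closed-loop idea can be sketched with a discrete PID loop around a toy plant; the unit-inertia model with viscous damping and the gains below are illustrative choices, not the identified dynamics of the SPM or finger joint:

```python
def simulate(kp, ki, kd, target=1.0, dt=0.001, steps=10000):
    """Discrete PID position control of a unit-inertia, lightly damped load."""
    pos, vel = 0.0, 0.0
    integral, prev_err = 0.0, target
    for _ in range(steps):
        err = target - pos
        integral += err * dt
        deriv = (err - prev_err) / dt
        torque = kp * err + ki * integral + kd * deriv
        prev_err = err
        acc = torque - 0.5 * vel        # plant: unit inertia, viscous damping
        vel += acc * dt                 # explicit Euler integration
        pos += vel * dt
    return pos

final = simulate(kp=50.0, ki=10.0, kd=5.0)
print(final)   # settles near the setpoint 1.0
```

The derivative term supplies the damping that a bare proportional loop on a near-frictionless direct drive actuator lacks, which is the practical reason open-loop and P-only control fall short on such devices.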
-
TR1993-650
1993
A Survey of Computational Differential Algebra
Mishra, B.
Abstract
|
PDF
Title: A Survey of Computational Differential Algebra
Author(s): Mishra, B.
Abstract:
In this note, we explore the computational aspects of several problems in differential algebra with concrete applications in dynamics and motion-planning problems in robotics, automatic synthesis of control schemes for nonlinear systems and simulation of physical systems with fixed degrees of freedom.
Our primary aim is to study, compute and structurally describe the solution of a system of differential equations with coefficients in a field (say, the field of complex numbers). There seem to have been various approaches in this direction: e.g., the ideal-theoretic approach of Ritt, the Galois-theoretic approach of Kolchin and Singer, and the group-theoretic technique of Lie. It is interesting to study their interrelationship and the effectivity of the various problems they suggest.
In general, these problems are known to be uncomputable; thus, we need to understand under what situations these problems become feasible. As related computer science questions, we also need to study the complexity of these problems, underlying data-structures, effects of the representation (e.g. sparsity).
Of related interest are some closely-related problems such as symbolic integration problem, solving difference equations, integro-differential equations and differential equations with algebraic constraints.
-
TR1993-646
1993
Bidirectional Edges Problem, Part I: A Simple Algorithm
Mishra, B.
Abstract
|
PDF
-
TR1993-647
1993
Bidirectional Edges Problem, Part II: An Efficient Algorithm
Mishra, B.
Abstract
|
PDF
-
TR1993-643
1993
ED I: NYU Educational Robot Design and Evaluation
Mishra, B.;
Antoniotti, M.
Abstract
|
PDF
Title: ED I: NYU Educational Robot Design and Evaluation
Author(s): Mishra, B.; Antoniotti, M.
Abstract:
The primary goal of the NYU educational robot project is to create a disseminable, multi-functional and inexpensive laboratory course sequence, aimed at improving the practical skills of undergraduate students specializing in robotics, vision, AI and manufacturing disciplines.
The main work-horse of the NYU educational project was chosen to be a multi-functional ED I robot system, consisting of a 4 DOF DD arm and several auxiliary devices. The system was designed to be simple, inexpensive, flexible and safe.
In this report, we describe the history, design, structure and evaluation of this robot system. We also describe several robotics and related course sequences that can use the ED I system effectively, and we provide some example experiments that have been run successfully on ED I.
This report has benefited from the labor, contribution, discussions, advice and criticisms of several people on the ED I project team and the credit for the final product goes to the entire team.
-
TR1993-653
1993
NYU Educational Robotics Project: A Pedagogic Overview
Mishra, B.;
Antoniotti, M.; Hansen, F.; Wallace, R.
Abstract
|
PDF
Title: NYU Educational Robotics Project: A Pedagogic Overview
Author(s): Mishra, B.; Antoniotti, M.; Hansen, F.; Wallace, R.
Abstract:
The primary goal of the NYU educational robotics project (NYU-ED) is to create a disseminable, multi-functional and inexpensive laboratory course sequence, aimed at improving the practical skills of undergraduate students specializing in robotics, vision, AI and manufacturing disciplines.
The earlier approaches to robotics laboratory education have been to use either industrial robot arms or commercially available low-power arms. In each case, there have been considerable problems in the lack of an ideal interface, in not providing enough flexibility to add other devices, or in the absence of adequate safety. Also, the underlying dynamical model is usually so complicated that performing any control experiment has been beyond the scope of all but advanced graduate students.
In this report, we describe our approach to deal with these problems in constructing a modern robotics laboratory. The main work-horse of the NYU educational project was chosen to be a multi-functional ED I robot system, consisting of a 4 DOF DD arm and several auxiliary devices. The system was designed to be simple, inexpensive, flexible and safe.
We also describe our experience with some advanced laboratory experiments using miniature direct drive robots that have proven to be ideal for teaching as they are reconfigurable, safe and easy to program.
-
TR1993-632
1993
Highly Efficient Dictionary Matching in Parallel
Muthukrishnan, S.;
Palem, K.
Abstract
|
PDF
-
Ph.D. Thesis
1993
Probabilistic Methods in Computer Science and Combinatorics
Narayanan, Babu
Abstract
|
PDF
Title: Probabilistic Methods in Computer Science and Combinatorics
Candidate: Narayanan, Babu
Advisor(s): Boppana, Ravi
Abstract:
Over the last few years, the Probabilistic method has become an important tool in Computer Science and Combinatorics. This thesis deals with three applications of the Probabilistic method.
The first problem concerns a model of imperfect randomness: the slightly-random source of Santha and Vazirani. In a slightly-random source with bias $\epsilon$, the conditional probability that the next bit output is 1, given complete knowledge of the previous bits output, is between $1/2 - \epsilon$ and $1/2 + \epsilon$. We show that, for every fixed $\epsilon < 1/2$, and for most sets, the probability of hitting that set using a slightly-random source is bounded away from 0.
The second problem arises in parallel and distributed computing. A set of n processors is trying to collectively flip a coin, using a protocol that tolerates a large number of faulty processors. We demonstrate the existence of perfect-information protocols that are immune to sets of $\epsilon n$ faulty processors, for every fixed $\epsilon < 1/2$.
Finally, we consider a problem in Ramsey theory. Let an adversary color the edges of the binomial random graph with r colors, the edge probability being $c / \sqrt{n}$, where c is a large enough constant. We show that, almost surely, a constant fraction of the triangles in the graph will be monochromatic.
-
TR1993-627
1993
Second Derivatives for Optimizing Eigenvalues of Symmetric Matrices
Overton, M.;
Womersly, R.
Abstract
|
PDF
-
TR1993-631
1993
Towards Second-Order Methods for Structured Nonsmooth Optimization
Overton, M.;
Ye, X.
Abstract
|
PDF
Title: Towards Second-Order Methods for Structured Nonsmooth Optimization
Author(s): Overton, M.; Ye, X.
Abstract:
Structured nonsmooth optimization objectives often arise in a composite form f = h ∘ a, where h is convex (but not necessarily polyhedral) and a is smooth. We consider the case where the structure of the nonsmooth convex function h is known. Specifically, we assume that, for any given point in the domain of h, a parameterization of a manifold \Omega, on which h reduces locally to a smooth function, is given. We discuss two affine spaces: the tangent space to the manifold \Omega at a point, and the affine hull of the subdifferential of h at the same point, and explain that these are typically orthogonal complements. We indicate how the construction of locally second-order methods is possible, even when h is nonpolyhedral, provided the appropriate Lagrangian, modeling the structure, is used. We illustrate our ideas with two important convex functions: the ordinary max function and the max eigenvalue function for symmetric matrices, and we solicit other interesting examples with genuinely different structure from the community.
-
Ph.D. Thesis
1993
Dataflow Analysis of Logic Programs Using Typed Domains
Papadopoulos, Georgios
Abstract
|
PDF
Title: Dataflow Analysis of Logic Programs Using Typed Domains
Candidate: Papadopoulos, Georgios
Advisor(s): Harrison, Malcolm C.
Abstract:
Dataflow analysis for logic programming languages has traditionally collected information about properties of whole terms. As a result, pessimistic assumptions had to be made about the subterms of a term, missing information that could be used for better compiler optimization and partial evaluation.
We use type information to divide each term into sets of subterms (t-terms) and then collect dataflow information for these sets. We identify and solve several problems as we develop a new sharing analysis for logic programs using our t-terms in a denotational abstract interpretation framework.
-
TR1993-648
1993
Iterative Substructuring Methods for Spectral Elements in Three Dimensions
Pavarino, L.;
Widlund, O.
Abstract
|
PDF
-
Ph.D. Thesis
1993
Statistical Recognition of Textured Patterns From Local Spectral Decomposition
Perry, Adi
Abstract
|
PDF
Title: Statistical Recognition of Textured Patterns From Local Spectral Decomposition
Candidate: Perry, Adi
Advisor(s): Lowe, David
Abstract:
Unsupervised segmentation of an image into homogeneously textured regions and the recognition of known texture patterns have been important tasks in computer vision. This thesis presents a new set of algorithms and describes an implemented system which performs these tasks. Initial features are computed from a local multi-channel spectral decomposition of the image that is implemented with Gabor filters. Textures are not assumed to have a band limited frequency spectrum and there is no supposition regarding the image contents: it may contain some unknown texture patterns or regions with no textures at all. Stability of features is enhanced by employing a method for smoothing reliable measurements. Both recognition and segmentation procedures use robust statistical algorithms and are performed locally for small image patches. In particular, statistical classification with principal components is used for recognition. Further accuracy is achieved by employing spatial consistency constraints. When a slanted texture is projected on the image plane, the patterns undergo systematic changes in the density, area, and directionality of the texture elements. Recognition is made invariant to such transformations by representing texture classes with multiple descriptors. These descriptors are computed from carefully selected 3-D views of the patterns. Simulated projections of textures from arbitrary viewpoints are obtained by using a new texture mapping algorithm. The segmentation algorithm overcomes the non-stationarity of the features by employing a new, robust similarity measure. The performance of these methods is demonstrated by applying them to real images.
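The feature stage can be illustrated with a two-filter Gabor bank: channel energies separate two synthetic oriented textures. The filter size, frequency, and stripe images below are our own choices, not the thesis's filter bank:

```python
import numpy as np

def gabor(size, freq, theta, sigma=3.0):
    """Even (cosine-phase) Gabor filter at orientation theta."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def energy(img, g):
    """Mean squared filter response over all patch positions (a channel energy)."""
    k = g.shape[0]
    vals = [(img[i:i + k, j:j + k] * g).sum()
            for i in range(img.shape[0] - k + 1)
            for j in range(img.shape[1] - k + 1)]
    return np.mean(np.square(vals))

x = np.arange(32)
vert = np.cos(2 * np.pi * 0.25 * x)[None, :] * np.ones((32, 1))   # vertical stripes
horiz = np.cos(2 * np.pi * 0.25 * x)[:, None] * np.ones((1, 32))  # horizontal stripes

g0 = gabor(9, 0.25, 0.0)           # tuned to vertical stripes
g90 = gabor(9, 0.25, np.pi / 2)    # tuned to horizontal stripes

f_vert = [energy(vert, g0), energy(vert, g90)]
f_horiz = [energy(horiz, g0), energy(horiz, g90)]
print(f_vert[0] > f_vert[1], f_horiz[1] > f_horiz[0])   # (True, True)
```

Each texture dominates the channel matched to its orientation, which is why such channel energies make usable classification features before any statistical machinery is applied.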
-
Ph.D. Thesis
1993
Automating Physical Database Design: An Extensible Approach
Rozen, Steven
Abstract
|
PDF
Title: Automating Physical Database Design: An Extensible Approach
Candidate: Rozen, Steven
Advisor(s): Shasha, Dennis
Abstract:
In a high-level query language such as SQL, queries yield the same result no matter how the logical schema is physically implemented. Nevertheless, a query's cost can vary by orders of magnitude among different physical implementations of the same logical schema, even with the most modern query optimizers. Therefore, designing a low-cost physical implementation is an important pragmatic problem, one that requires a sophisticated understanding of physical design options and query strategies, and that involves estimating query costs, a tedious and error-prone process when done manually.
We have devised a simple framework for automating physical design in relational or post-relational DBMSs and in database programming languages. Within this framework, design options are uniformly represented as ``features'', and designs are represented by ``conflict''-free sets of features. (Mutually exclusive features conflict. An example would be two primary indexes on the same table.) The uniform representation of design options as features accommodates a greater variety of design options than previous approaches; adding a new design option (e.g. a new index type) merely entails characterizing it as a feature with appropriate parameters. We propose an approximation algorithm, based on this framework, that finds low-cost physical designs. In an initial phase, the algorithm examines the logical schema, data statistics, and queries, and generates ``useful features'', features that might reduce query costs. In a subsequent phase, the algorithm uses the DBMS's cost estimates to find ``best features'', features that belong to the lowest-cost designs for each individual query. Finally, the algorithm searches among conflict-free subsets of the best features of all the queries to find organizations with low global cost estimates. We have implemented a prototype physical design assistant for the INGRES relational DBMS, and we evaluate its designs for several benchmarks, including AS3AP. Our experiments with the prototype show that it can produce good designs, and that the critical factor limiting their quality is the accuracy of query cost estimates. The prototype implementation isolates dependencies on INGRES, permitting our framework to produce design assistants for a wide range of relational, nested-relational, and object-oriented DBMSs.
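The feature/conflict framework described above lends itself to a compact sketch. The following Python toy is purely illustrative: the feature names, conflict set, and cost numbers are invented, and the exhaustive search stands in for the prototype's pruned search via per-query ``best features''.

```python
from itertools import combinations

def conflict_free(features, conflicts):
    """A design is valid if no pair of its features is mutually exclusive."""
    return not any((a, b) in conflicts or (b, a) in conflicts
                   for a, b in combinations(sorted(features), 2))

def best_design(useful_features, conflicts, cost):
    """Search conflict-free subsets of features for the lowest global cost.

    `cost(design)` stands in for the optimizer's estimate summed over the
    workload; a real design assistant prunes this search heavily.
    """
    best, best_cost = frozenset(), cost(frozenset())
    for r in range(1, len(useful_features) + 1):
        for subset in combinations(useful_features, r):
            d = frozenset(subset)
            if conflict_free(d, conflicts) and cost(d) < best_cost:
                best, best_cost = d, cost(d)
    return best, best_cost

# Toy workload: two primary indexes on the same table conflict;
# the costs are made-up numbers, not optimizer output.
features = ["primary_idx_a", "primary_idx_b", "secondary_idx_c"]
conflicts = {("primary_idx_a", "primary_idx_b")}
costs = {frozenset(): 100.0,
         frozenset({"primary_idx_a"}): 60.0,
         frozenset({"primary_idx_b"}): 70.0,
         frozenset({"secondary_idx_c"}): 90.0,
         frozenset({"primary_idx_a", "secondary_idx_c"}): 55.0,
         frozenset({"primary_idx_b", "secondary_idx_c"}): 65.0}
design, total = best_design(features, conflicts,
                            lambda d: costs.get(d, float("inf")))
```

Here the conflicting pair is never considered together, and the search settles on the cheapest compatible pair of indexes.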
-
TR1993-629
1993
Two-Level Schwarz Methods for Nonconforming Finite Elements and Discontinuous Coefficients
Sarkis, M.
Abstract
|
PDF
Title: Two-Level Schwarz Methods for Nonconforming Finite Elements and Discontinuous Coefficients
Author(s): Sarkis, M.
Abstract:
Two-level domain decomposition methods are developed for a simple nonconforming approximation of second order elliptic problems. A bound is established for the condition number of these iterative methods, which grows only logarithmically with the number of degrees of freedom in each subregion. This bound holds for two and three dimensions and is independent of jumps in the value of the coefficients.
-
TR1993-640
1993
A Probabilistic Approach to Geometric Hashing Using Line Features
Tsai, F.
Abstract
|
PDF
Title: A Probabilistic Approach to Geometric Hashing Using Line Features
Author(s): Tsai, F.
Abstract:
Most current object recognition algorithms assume reliable image segmentation, which in practice is often not available. We examine the combination of the Hough Transform with a variation of Geometric Hashing as a technique for model-based object recognition in seriously degraded single intensity images. Prior work on the performance analysis of geometric hashing has focused on point features and shown its noise sensitivity. This paper uses line features to compute recognition invariants in a potentially more robust way. We investigate the statistical behavior of these line features analytically. Various viewing transformations, which 2-D (or flat 3-D) objects undergo during image formation, are considered. For the case of affine transformations, which are often suitable substitutes for more general perspective transformations, we show experimentally that the technique is noise resistant and can be used in highly occluded environments.
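As a hedged illustration of the hashing-and-voting scheme (not the paper's actual invariants, noise model, or weighting), the sketch below indexes models by a quantized rotation-, translation- and scale-invariant of line pairs, the angle between them, and recognizes by voting; the model line sets and quantization step are invented.

```python
from collections import defaultdict

def pair_invariants(line_angles):
    """Yield quantized pairwise angle differences, folded into [0, 90] degrees."""
    for i in range(len(line_angles)):
        for j in range(i + 1, len(line_angles)):
            d = abs(line_angles[i] - line_angles[j]) % 180
            yield round(min(d, 180 - d))

def build_table(models):
    """Preprocessing: hash each model under all of its pair invariants."""
    table = defaultdict(set)
    for name, angles in models.items():
        for inv in pair_invariants(angles):
            table[inv].add(name)
    return table

def recognize(table, scene_angles):
    """Recognition: scene invariants index the table; models accumulate votes."""
    votes = defaultdict(int)
    for inv in pair_invariants(scene_angles):
        for name in table.get(inv, ()):
            votes[name] += 1
    return max(votes, key=votes.get) if votes else None

# Toy models given by the orientations of their line features (degrees).
models = {"square": [0, 90, 0, 90], "triangle": [0, 60, 120]}
table = build_table(models)
```

A noisy scene such as `[0, 90, 1, 91]` still out-votes the alternatives for the square, which is the robustness the voting scheme is after.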
-
TR1993-625
1993
Using Line Invariants for Object Recognition by Geometric Hashing
Tsai, F.
Abstract
|
PDF
Title: Using Line Invariants for Object Recognition by Geometric Hashing
Author(s): Tsai, F.
Abstract:
Geometric Hashing is a model-based object recognition technique for detecting objects which can be partially overlapping or partly occluded. It precompiles, from local geometric features, redundant transformation-invariant information of the models in a hash table and uses the invariants computed from a scene for fast indexing into the hash table to hypothesize possible matches between object instances and object models during recognition.
In its simplest form, the geometric hashing method assumes relatively noise-free data and is applied to objects with points as local features. However, extraction of the locations of point features is inherently error-prone, and the analysis of geometric hashing on point sets shows considerable noise sensitivity. Line features can generally be extracted with greater accuracy.
We investigate the use of line features for geometric hashing applied to 2-D (or flat 3-D) object recognition and derive, from a combination of line features, invariants for lines under various geometric transformations.
-
Ph.D. Thesis
1993
A Probabilistic Approach to Geometric Hashing using Line Features
Tsai, Frank
Abstract
|
PDF
Title: A Probabilistic Approach to Geometric Hashing using Line Features
Candidate: Tsai, Frank
Advisor(s): Schwartz, Jacob T.
Abstract:
One of the most important goals of computer vision research is object recognition. Most current object recognition algorithms assume reliable image segmentation, which in practice is often not available. This research exploits the combination of the Hough method with the geometric hashing technique for model-based object recognition in seriously degraded intensity images.
We describe the analysis, design and implementation of a recognition system that can recognize, in a seriously degraded intensity image, multiple objects modeled by a collection of lines.
We first examine the factors affecting line extraction by the Hough transform and propose various techniques to cope with them. Line features are then used as primitive features from which we compute the geometric invariants used by the geometric hashing technique. Various geometric transformations, including rigid, similarity, affine and projective transformations, are examined. We then derive the ``spread'' of a computed invariant over the hash space caused by ``perturbation'' of the lines giving rise to it. This is the first noise analysis of its kind for line features in geometric hashing. The result of the noise analysis is then used in a weighted voting scheme for the geometric hashing technique. We have implemented the system described and carried out a series of experiments on polygonal objects modeled by lines, assuming affine approximations to perspective viewing transformations. Our experimental results show that the technique described is noise resistant and suitable in an environment containing many occlusions.
-
TR1993-634
1993
Miniature Direct-Drive Rotary Actuators
Wallace, R.
Abstract
|
PDF
Title: Miniature Direct-Drive Rotary Actuators
Author(s): Wallace, R.
Abstract:
This paper reports the development of direct drive DC motor actuators for miniature robots. The motors are based on Nd-Fe-B rare earth permanent magnets and controlled by low cost microcontrollers. The motors have low friction, small size, high speed, low construction cost, no gear backlash, operate safely without limit switches, have limited self-braking, and generate moderate torque. Significantly, one motor can generate enough torque to lift a second motor of about the same size against the force of gravity, at a distance approximately equal to the size of the motor, without resorting to the use of a counterweight. We demonstrated the feasibility of using these actuators to make a two-link robot finger or leg.
-
TR1993-651
1993
Miniature Direct Drive Rotary Actuators II: Eye, Finger, and Leg
Wallace, R.
Abstract
|
PDF
Title: Miniature Direct Drive Rotary Actuators II: Eye, Finger, and Leg
Author(s): Wallace, R.
Abstract:
We have developed miniature direct drive DC motor actuators for robotics. These actuators have low friction, small size, high speed, low construction cost, no gear backlash, operate safely without the use of limit switches and generate moderate torque at a high torque to weight ratio. Our initial experiments indicated the feasibility of constructing a variety of new high speed low cost actuators, for applications in camera pointing, robot hands, and robot legs. In this work we study some prototype devices in each of these categories.
-
TR1993-633
1993
Space Variant Image Processing
Wallace, R.;
Ong, P.; Bederson, B.; Schwartz, E.
Abstract
|
PDF
Title: Space Variant Image Processing
Author(s): Wallace, R.; Ong, P.; Bederson, B.; Schwartz, E.
Abstract:
This paper describes a graph-based approach to image processing, intended for use with images obtained from sensors having space variant sampling grids. The connectivity graph (CG) is presented as a fundamental framework for posing image operations in any kind of space variant sensor. Partially motivated by the observation that human vision is strongly space variant, a number of research groups have been experimenting with space variant sensors. Such systems cover wide solid angles yet maintain high acuity in their central regions. Implementation of space variant systems poses at least two outstanding problems. First, such a system must be active, in order to utilize its high acuity region; second, there are significant image processing problems introduced by the non-uniform pixel size, shape and connectivity. Familiar image processing operations such as connected components, convolution, template matching, and even image translation, take on new and different forms when defined on space variant images. The present paper provides a general method for space variant image processing, based on a connectivity graph which represents the neighbor-relations in an arbitrarily structured sensor. We illustrate this approach with the following applications: (1) Connected components is reduced to its graph theoretic counterpart. We illustrate this on a logmap sensor, which possesses a difficult topology due to the branch cut associated with the complex logarithm function. (2) We show how to write local image operators in the connectivity graph that are independent of the sensor geometry. (3) We relate the connectivity graph to pyramids over irregular tessellations, and implement a local binarization operator in a 2-level pyramid. (4) Finally, we expand the connectivity graph into a structure we call a transformation graph, which represents the effects of geometric transformations in space variant image sensors.
Using the transformation graph, we define an efficient algorithm for matching in the logmap images and solve the template matching problem for space variant images. Because of the very small number of pixels typical of logarithmic structured space variant arrays, the connectivity graph approach to image processing is suitable for real-time implementation, and provides a generic solution to a wide range of image processing applications with space variant sensors.
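The reduction of connected components to its graph-theoretic counterpart can be sketched minimally as follows; the pixel ids and adjacency below are an invented stand-in for a connectivity graph, which in the real system is derived from the sensor geometry.

```python
from collections import deque

def connected_components(neighbors, foreground):
    """Label foreground pixels by BFS over the connectivity graph.

    neighbors:  dict mapping a pixel id to its adjacent pixel ids
    foreground: set of pixel ids that are "on"
    Returns a dict mapping each foreground pixel to a component label.
    The pixel size, shape, and topology never appear: only adjacency does.
    """
    label, labels = 0, {}
    for start in foreground:
        if start in labels:
            continue
        labels[start] = label
        queue = deque([start])
        while queue:
            p = queue.popleft()
            for q in neighbors.get(p, ()):
                if q in foreground and q not in labels:
                    labels[q] = label
                    queue.append(q)
        label += 1
    return labels

# Tiny graph: pixels 0-1-2 form a chain; pixel 3 touches only the
# background pixel 4, so it forms its own component.
neighbors = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
labels = connected_components(neighbors, {0, 1, 2, 3})
```

Because the traversal consults only the neighbor relation, the same code serves a uniform grid, a logmap with its branch cut, or any other space variant tessellation.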
-
TR1993-636
1993
Voice-Bandwidth Visual Communication Through Logmaps: The Telecortex
Wallace, R.;
Bederson, B.; Schwartz, E.
Abstract
|
PDF
Title: Voice-Bandwidth Visual Communication Through Logmaps: The Telecortex
Author(s): Wallace, R.; Bederson, B.; Schwartz, E.
Abstract:
We present a robotic video telephone application of the Cortex-1 miniaturized space-variant active vision system. The embedded processor architecture of Cortex-1 enables it to implement a variety of functions not found in conventional video telephones, for example the camera tracks moving users with its pan-tilt mechanism. We also report an analog channel coding scheme to transmit logmap video images through band-limited analog channels such as the public switched telephone network (PSTN). The transmitter divides the voice frequency band into 768 channels, and modulates two values in quadrature on each channel. Some channels are reserved for special calibration signals enabling the receiver to recover both the phase and magnitude of the transmitted signal. The remaining channels carry pixel intensities. We synthesize the signal in the frequency domain and run the FFT algorithm to implement a fast conversion to a real, time-domain signal. A phase-lock loop keeps the receiver frame-synchronized with the transmitter. We constructed an experimental video telephone that sends 1376 pixel logmap images at 3.9 frames per second through the PSTN. Using the analog channel coding scheme, we achieve an effective data transfer rate in excess of 40000 bits per second.
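The frequency-domain synthesis can be sketched with a pure-Python DFT; the channel count, framing, and calibration channels are omitted, so this is an illustrative quadrature scheme with toy values, not the 768-channel system.

```python
import cmath

def synthesize(symbols, n):
    """Real time-domain frame carrying one complex value per channel.

    Channel k (bins 1..len(symbols)) holds two quadrature components as
    one complex symbol; mirroring conjugates onto bins n-k makes the
    spectrum Hermitian, so the inverse DFT is a real signal.
    """
    spec = [0j] * n
    for k, s in enumerate(symbols, start=1):
        spec[k] = s
        spec[n - k] = s.conjugate()
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def demodulate(frame, m):
    """Recover the m channel symbols with a forward DFT of the frame."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(1, m + 1)]

tx = [1 + 1j, -0.5 + 0.25j]   # two channels, two quadrature values each
frame = synthesize(tx, 16)    # 16-sample real frame on the "line"
rx = demodulate(frame, 2)     # receiver recovers the complex symbols
```

The real system replaces these naive DFT loops with the FFT and adds calibration channels so the receiver can undo the channel's phase and magnitude distortion.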
-
TR1992-610
1992
HESFCN - A Fortran package of Hessian Subroutines for Testing Nonlinear Optimization Software
Averbukh, V.;
Figueroa, S.; Schlick, T.
Abstract
|
PDF
Title: HESFCN - A Fortran package of Hessian Subroutines for Testing Nonlinear Optimization Software
Author(s): Averbukh, V.; Figueroa, S.; Schlick, T.
Abstract:
We report the development of Hessian FORTRAN routines for testing unconstrained nonlinear optimization. An existing package, "Algorithm 566" (J. Moré, B. G. Garbow, and K. Hillstrom, ACM Trans. Math. Softw. 7, 14-41 and 136-140, 1981), provides function and gradient subroutines of 18 test functions for multivariate minimization. Our supplementary Hessian segments will enable users to test optimization software that requires second derivative information. Eigenvalue analysis throughout the minimization will also be possible, with the goal of better understanding minimization progress by different algorithms and the relation of progress to eigenvalue distribution and condition number.
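As a hedged sketch of what such a Hessian segment supplies, here is the two-dimensional Rosenbrock function, a classic test problem of this kind, with its analytic gradient and Hessian, written in Python rather than the package's Fortran interface.

```python
def rosenbrock(x1, x2):
    """f(x) = 100 (x2 - x1^2)^2 + (1 - x1)^2, minimized at (1, 1)."""
    return 100.0 * (x2 - x1**2)**2 + (1.0 - x1)**2

def rosenbrock_grad(x1, x2):
    """Analytic gradient [df/dx1, df/dx2]."""
    return [-400.0 * x1 * (x2 - x1**2) - 2.0 * (1.0 - x1),
            200.0 * (x2 - x1**2)]

def rosenbrock_hess(x1, x2):
    """Analytic symmetric 2x2 Hessian of the function above."""
    return [[1200.0 * x1**2 - 400.0 * x2 + 2.0, -400.0 * x1],
            [-400.0 * x1, 200.0]]

# At the minimizer (1, 1) the gradient vanishes and the Hessian is
# positive definite, which is what eigenvalue analysis along a
# minimization path would track.
```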
-
Ph.D. Thesis
1992
A Miniature Space-Variant Active Vision System: Cortex-I
Bederson, Benjamin
Abstract
|
PDF
Title: A Miniature Space-Variant Active Vision System: Cortex-I
Candidate: Bederson, Benjamin
Advisor(s): Schwartz, Eric
Abstract:
We have developed a prototype miniaturized active vision system whose sensor architecture is based on a logarithmically structured space-variant pixel geometry. A space-variant image's resolution changes across the image. Typically, the central part of the image has a very high resolution, and the resolution falls off gradually in the periphery. Our system integrates a miniature CCD-based camera, pan-tilt actuator, controller, general purpose processors and display. Due to the ability of space-variant sensors to cover large work-spaces, yet provide high acuity with an extremely small number of pixels, space-variant active vision system architectures provide the potential for radical reductions in system size and cost. We have realized this by creating an entire system that takes up less than a third of a cubic foot. In this thesis, we describe a prototype space-variant active vision system (Cortex-I) which performs such tasks as tracking moving objects and license plate reading, and functions as a video telephone.
We report on the design and construction of the camera (which is 8x8x8mm), its readout, and a fast mapping algorithm to convert the uniform image to a space-variant image. We introduce a new miniature pan-tilt actuator, the Spherical Pointing Motor (SPM), which is 4x5x6cm. The basic idea behind the SPM is to orient a permanent magnet to the magnetic field induced by three orthogonal coils by applying the appropriate ratio of currents to the coils. Finally, we present results of integrating the system with several applications. Potential application domains for systems of this type include vision systems for mobile robots and robot manipulators, traffic monitoring systems, security and surveillance, telerobotics, and consumer video communications.
The long-range goal of this project is to demonstrate that major new applications of robotics will become feasible when small low-cost machine vision systems can be mass-produced. This notion of `commodity robotics' is expected to parallel the impact of the personal computer, in the sense of opening up new application niches for what has until now been expensive and therefore limited technology.
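The Spherical Pointing Motor's principle, orienting the rotor magnet by choosing the ratio of currents in three orthogonal coils, can be sketched as follows; the linear field model, axis conventions, and current scale are illustrative assumptions, not the device's calibrated drive.

```python
import math

def coil_currents(pan, tilt, i_max=1.0):
    """Coil currents that point the induced field along (pan, tilt).

    Assumes each coil's field is proportional to its current along one
    of three orthogonal axes, so the net field direction is simply the
    unit vector for the commanded angles (radians), scaled by i_max.
    """
    dx = math.cos(tilt) * math.cos(pan)
    dy = math.cos(tilt) * math.sin(pan)
    dz = math.sin(tilt)
    return (i_max * dx, i_max * dy, i_max * dz)
```

Under this model the magnet aligns with the commanded direction in open loop, which is why the actuator can operate safely without limit switches.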
-
TR1992-602
1992
An $O(m \log n)$-Time Algorithm for the Maximal Planar Subgraph Problem
Cai, J.;
Han, X.; Tarjan, R.
Abstract
|
PDF
Title: An $O(m \log n)$-Time Algorithm for the Maximal Planar Subgraph Problem
Author(s): Cai, J.; Han, X.; Tarjan, R.
Abstract:
Based on a new version of Hopcroft and Tarjan's planarity testing algorithm, we develop an $O(m \log n)$-time algorithm to find a maximal planar subgraph.
-
TR1992-604
1992
More Efficient Bottom-Up Multi-Pattern Matching in Trees
Cai, J.;
Paige, R.; Tarjan, R.
Abstract
|
PDF
Title: More Efficient Bottom-Up Multi-Pattern Matching in Trees
Author(s): Cai, J.; Paige, R.; Tarjan, R.
Abstract:
Pattern matching in trees is fundamental to a variety of programming language systems. However, progress has been slow in satisfying a pressing need for general purpose pattern matching algorithms that are efficient in both time and space. We offer asymptotic improvements in both time and space to Chase's bottom-up algorithm for pattern preprocessing. A preliminary implementation of our algorithm runs ten times faster than Chase's implementation on the hardest problem instances. Our preprocessing algorithm has the advantage of being on-line with respect to pattern additions and deletions. It also adapts to favorable input instances, and on Hoffmann and O'Donnell's class of Simple Patterns, it performs better than their special purpose algorithm tailored to this class. We show how to modify our algorithm using a new decomposition method to obtain a space/time tradeoff. Finally, we trade a log factor in time for a linear space bottom-up pattern matching algorithm that handles a wide subclass of Hoffmann and O'Donnell's Simple Patterns.
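The bottom-up scheme being improved can be sketched as follows; this hedged toy computes match sets directly at each node (Chase-style preprocessing would instead precompile these set-to-set transitions into tables), using a made-up pattern syntax in which `"_"` is a variable matching any subtree.

```python
VAR = "_"   # pattern variable matching any subtree

def subterms(pattern):
    """All subterms of a pattern, children before parents."""
    out = []
    if isinstance(pattern, tuple):
        for child in pattern[1:]:
            out.extend(subterms(child))
    out.append(pattern)
    return out

def match_sets(tree, patterns):
    """Return the set of pattern subterms matching at the root of `tree`.

    Each node's match set is computed from its children's match sets,
    which is the bottom-up invariant the table-driven algorithms encode.
    """
    pool = [q for p in patterns for q in subterms(p)]
    def visit(t):
        child_sets = [visit(c)
                      for c in (t[1:] if isinstance(t, tuple) else ())]
        here = set()
        for q in pool:
            if q == VAR:
                here.add(q)
            elif isinstance(q, tuple) and isinstance(t, tuple) \
                    and q[0] == t[0] and len(q) == len(t) \
                    and all(qc in cs for qc, cs in zip(q[1:], child_sets)):
                here.add(q)
            elif q == t:
                here.add(q)
        return here
    return visit(tree)
```

For the pattern `("f", "_", ("g", "a"))`, a subject matches at a node exactly when the whole pattern lands in that node's match set.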
-
TR1992-609
1992
Multiset Discrimination - A Method for Implementing Programming Language Systems without Hashing
Cai, J.;
Paige, R.
Abstract
|
PDF
-
TR1992-603
1992
Counting Embeddings of Planar Graphs Using DFS Trees
Cai, Jiazhen
Abstract
|
PDF
Title: Counting Embeddings of Planar Graphs Using DFS Trees
Author(s): Cai, Jiazhen
Abstract:
Previous algorithms for counting embeddings of planar graphs used P-Q trees and were restricted to biconnected graphs. Although the P-Q tree approach is conceptually simple, its implementation is complicated. In this paper we solve this problem using DFS trees, which are easy to implement. We also give formulas that count the number of embeddings of general planar graphs (not necessarily connected or biconnected) in O(n) arithmetic steps, where n is the number of vertices of the input graph. Finally, our algorithm can be extended to generate all embeddings of a planar graph in linear time with respect to the output.
-
TR1992-595
1992
Multiplicative Schwarz Algorithms for Some Nonsymmetric and Indefinite Problems
Cai, X.-C.;
Widlund, O.
Abstract
|
PDF
Title: Multiplicative Schwarz Algorithms for Some Nonsymmetric and Indefinite Problems
Author(s): Cai, X.-C.; Widlund, O.
Abstract:
The classical Schwarz alternating method has recently been generalized in several directions. This effort has resulted in a number of new powerful domain decomposition methods for elliptic problems, in new insight into multigrid methods and in the development of a very useful framework for the analysis of a variety of iterative methods. Most of this work has focused on positive definite, symmetric problems. In this paper a general framework is developed for multiplicative Schwarz algorithms for nonsymmetric and indefinite problems. Several applications are then discussed including two- and multi-level Schwarz methods and iterative substructuring algorithms. Some new results on additive Schwarz methods are also presented.
-
Ph.D. Thesis
1992
Regular Expressions to DFA's using Compressed NFA's
Chang, Chia-Hsiang
Abstract
|
PDF
Title: Regular Expressions to DFA's using Compressed NFA's
Candidate: Chang, Chia-Hsiang
Advisor(s): Paige, Robert
Abstract:
We show how to turn a regular expression R of length r into an O(s) space representation of McNaughton and Yamada's NFA, where s is the number of occurrences of alphabet symbols in R, and s + 1 is the number of NFA states. The standard adjacency list representation of McNaughton and Yamada's NFA takes up $1 + 2s + s^2$ space in the worst case. The adjacency list representation of the NFA produced by Thompson takes up between 2r and 6r space, where r can be arbitrarily larger than s. Given any subset V of states in McNaughton and Yamada's NFA, our representation can be used to compute the set U of states one transition away from the states in V in optimal time O(|V| + |U|). McNaughton and Yamada's NFA requires $\Theta(|V| \times |U|)$ time in the worst case. Using Thompson's NFA, the equivalent calculation requires $\Theta(r)$ time in the worst case.
An implementation of our NFA representation confirms that it takes up an order of magnitude less space than McNaughton and Yamada's machine. An implementation that produces a DFA from our NFA representation by subset construction shows linear and quadratic speedups over subset construction starting from both Thompson's and McNaughton and Yamada's NFA's.
It also shows that the DFA produced from our NFA is as much as one order of magnitude smaller than DFA's constructed from the two other NFA's.
A UNIX egrep-compatible program called cgrep, based on our NFA representation, has been implemented. A benchmark shows that cgrep is dramatically faster than both UNIX egrep and GNU e?grep.
Throughout this thesis the importance of syntax is stressed in the design of our algorithms. In particular, we exploit a method of program improvement in which costly repeated calculations can be avoided by establishing and maintaining program invariants. This method of symbolic finite differencing has been used previously by Douglas Smith to derive efficient functional programs.
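The back half of the pipeline can be sketched generically. The code below builds McNaughton and Yamada's position NFA for (a|b)*abb by hand (s = 5 symbol occurrences, hence s + 1 = 6 NFA states) and runs the textbook subset construction; the thesis's contribution is an O(s) compressed representation that replaces the plain adjacency dict used here.

```python
def subset_construction(delta, start, accepting, alphabet):
    """Build a DFA over frozensets of NFA states.

    delta: dict mapping (state, symbol) to a set of successor states.
    Returns the DFA transition table, its start state, and accept states.
    """
    start_set = frozenset([start])
    dfa, worklist = {}, [start_set]
    while worklist:
        s = worklist.pop()
        if s in dfa:
            continue
        dfa[s] = {}
        for a in alphabet:
            t = frozenset(q for p in s for q in delta.get((p, a), ()))
            dfa[s][a] = t
            if t not in dfa:
                worklist.append(t)
    accept = {s for s in dfa if s & accepting}
    return dfa, start_set, accept

def matches(word, delta, start, accepting, alphabet):
    """Run the word through the DFA produced by subset construction."""
    dfa, state, accept = subset_construction(delta, start, accepting, alphabet)
    state = frozenset([start])
    for c in word:
        state = dfa[state][c]
    return state in accept

# Position NFA for (a|b)*abb: state 0 is the start; states 1..5 stand
# for the five alphabet-symbol occurrences. State 5 is accepting.
delta = {(0, 'a'): {1, 3}, (0, 'b'): {2},
         (1, 'a'): {1, 3}, (1, 'b'): {2},
         (2, 'a'): {1, 3}, (2, 'b'): {2},
         (3, 'b'): {4},
         (4, 'b'): {5}}
```

Swapping the `delta.get((p, a), ())` lookup for the compressed representation is exactly where the thesis's space and time savings enter.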
-
TR1992-620
1992
Backward Analysis for Higher-Order Functions Using Inverse Images
Chuang, T.-R.;
Goldberg, B.
Abstract
|
PDF
Title: Backward Analysis for Higher-Order Functions Using Inverse Images
Author(s): Chuang, T.-R.; Goldberg, B.
Abstract:
We propose a method for performing backward analysis on higher-order functional programming languages based on computing inverse images of functions over abstract domains. This method can be viewed as abstract interpretation done backward. Given an abstract semantics which supports forward analysis, we can transform it into an abstract semantics which performs backward analysis. We show that if the original abstract semantics is correct and computable, then the transformed version of the abstract semantics is also correct and computable.
More specifically, given a forward abstract semantics of a higher-order functional language which is expressed in terms of Scott-closed powerdomains, we derive a backward abstract semantics which is expressed in terms of Scott-open powerdomains. The derivation is shown to be correct, and the relationships between forward analysis and backward analysis are established. We apply this method to classic strictness analysis in functional languages and obtain promising results. We show that the time complexity of inverse image based backward analysis is not much worse than that of the forward analysis from which it is derived. We then compare this work with the previous work of Wadler and Hughes (1987), Hughes (1988), and Burn (1990), showing that some special restrictions and constructions in those works have a natural interpretation in the Scott-closed/Scott-open powerdomain framework. A brief outline of applying the inverse image method to other backward semantic analyses is also given.
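On the two-point strictness domain, the inverse-image view can be sketched as follows; the abstractions are deliberately toy ones, chosen for illustration, not the paper's powerdomain construction.

```python
def inverse_image(f_abs, demand, domain=(0, 1)):
    """Backward analysis as an inverse image: all abstract arguments
    whose abstract result lies in the demanded result set."""
    return {x for x in domain if f_abs(x) in demand}

# Two-point domain: 0 = divergence/undefined, 1 = possibly defined.
# f x = x + 1 is strict, so abstractly it is the identity on {0, 1}.
strict_plus1 = lambda x: x
# g x = 2 ignores its argument, so abstractly it is constant 1.
const_two = lambda x: 1

# Demand {1}: "the result must be defined". A strict function can only
# deliver that from a defined argument; a constant function delivers it
# regardless of the argument.
need = inverse_image(strict_plus1, {1})
anyx = inverse_image(const_two, {1})
```

Running the forward abstraction backward over a demand set is the core move; the paper's contribution is doing this correctly and computably at higher order, over powerdomains.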
-
TR1992-606
1992
Domain Decomposition Algorithms with Small Overlap
Dryja, M.;
Widlund, O.
Abstract
|
PDF
Title: Domain Decomposition Algorithms with Small Overlap
Author(s): Dryja, M.; Widlund, O.
Abstract:
Numerical experiments have shown that two-level Schwarz methods often perform very well even if the overlap between neighboring subregions is quite small. This is true to an even greater extent for a related algorithm, due to Barry Smith, where a Schwarz algorithm is applied to the reduced linear system of equations that remains after the variables interior to the subregions have been eliminated. In this paper, a supporting theory is developed.
-
TR1992-615
1992
Some Recent Results on Schwarz Type Domain Decomposition Algorithms
Dryja, M.;
Widlund, O.
Abstract
|
PDF
Title: Some Recent Results on Schwarz Type Domain Decomposition Algorithms
Author(s): Dryja, M.; Widlund, O.
Abstract:
Numerical experiments have shown that two-level Schwarz methods, for the solution of discrete elliptic problems, often perform very well even if the overlap between neighboring subregions is quite small. This is true to an even greater extent for a related algorithm, due to Barry Smith, where a Schwarz algorithm is applied to the reduced linear system of equations that remains after the variables interior to the subregions have been eliminated. A supporting theory is outlined.
-
Ph.D. Thesis
1992
Complexity Issues in Computational Algebra
Gallo, Giovanni
Abstract
|
PDF
Title: Complexity Issues in Computational Algebra
Candidate: Gallo, Giovanni
Advisor(s): Mishra, Bud
Abstract:
The ideal membership problem for the ring of multivariate polynomials is a central problem in Computational Algebra. Relatively tight computational complexity bounds for this problem are known in the case of polynomials with coefficients in a field. After reviewing these results we give an algorithm, together with an upper bound on its complexity, for the solution of the membership problem in the case of polynomials with integer coefficients. This result is obtained by adapting Buchberger's algorithm to the integer case. As an application, we also provide a more general upper bound on the length of strictly ascending chains of ideals in the ring $Z[x_1,\ldots,x_n]$.
Many applications of Computational Algebra, however, do not require the complete solution of the membership problem, and alternative approaches are available. In this thesis we survey the method of characteristic sets, originally introduced by Ritt in the forties and now widely applied, particularly in Elementary Geometry Theorem Proving. We present optimal algorithms for computing a characteristic set with simple-exponential sequential and polynomial parallel time complexities.
We finally present an attempt to generalize some of the constructive methods of Commutative Algebra to the algebra of differential polynomials. Rings of differential polynomials do not resemble their purely algebraic counterparts: we prove that there exist non-recursive differential ideals in $Z\{x\}$ and hence that, in general, no effective method can be devised to solve the membership problem in this case. However, special classes of ideals can be approached algorithmically, and toward this goal we generalize the concept of H-basis, first introduced by Macaulay for algebraic ideals, to differential rings.
-
TR1992-607
1992
GMRES/CR and Arnoldi/Lanczos as Matrix Approximation Problems
Greenbaum, A.;
Trefethen, L.
Abstract
|
PDF
Title: GMRES/CR and Arnoldi/Lanczos as Matrix Approximation Problems
Author(s): Greenbaum, A.; Trefethen, L.
Abstract:
The GMRES and Arnoldi algorithms, which reduce to the CR and Lanczos algorithms in the symmetric case, both minimize $\|p(A)b\|$ over polynomials p of degree n. The difference is that p is normalized at $z = 0$ for GMRES and at $z = \infty$ for Arnoldi. Analogous "ideal GMRES" and "ideal Arnoldi" problems are obtained if one removes b from the discussion and minimizes $\|p(A)\|$ instead. Investigation of these true and ideal approximation problems gives insight into how fast GMRES converges and how the Arnoldi iteration locates eigenvalues.
-
TR1992-608
1992
Matrices that Generate the Same Krylov Residual Spaces
Greenbaum, A.;
Strakos, Z.
Abstract
|
PDF
-
Ph.D. Thesis
1992
Typing Higher-Order Functions with Dynamic Dispatching
Hsieh, Chih-Hung
Abstract
|
PDF
Title: Typing Higher-Order Functions with Dynamic Dispatching
Candidate: Hsieh, Chih-Hung
Advisor(s): Harrison, Malcolm C.
Abstract:
We design new type expressions and algorithms to classify and check object types in higher-order programming. Our computation model is imperative and strongly typed. It has dynamic-dispatched functions, higher-order bounded polymorphic functions, record and function subtyping, parameterized types, both named and structural types, free-union types, existential union types, poly-typed variables, poly-typed expressions, and heterogeneous collections.
A prototype of a mini-language with the above features is implemented in Prolog with a type checking system. A small but powerful set of typing structures and operations is identified. The type checking rules are formally defined. A new technique is developed for translating recursive type relations into cyclic AND/OR graphs. Efficient algorithms are designed for resolving generalized AND/OR graphs with constraints on valid cycles.
Using elegant syntax, the new type system describes more general and precise type relations than any other type system we know of. The new technique for translating type relations into AND/OR graphs provides a new direction for implementing a higher-order polymorphic type system, one not available in unification-based type systems. The AND/OR graph models are general enough to represent recursive relations, and their applications are not limited to type-checking. Our AND/OR graph resolution algorithms find the optimal solutions. They are proved efficient in theory and shown to be practical in our implementation.
-
Ph.D. Thesis
1992
Computer Simulation of Cortical Polymaps
Landau, Pierre
Abstract
|
PDF
Title: Computer Simulation of Cortical Polymaps
Candidate: Landau, Pierre
Advisor(s): Schwartz, Eric
Abstract:
Neo-cortical sensory areas of the vertebrate brain are organized in terms of topographic maps of peripheral sense-organs. Cortical topography has been generally modeled in terms of a continuous map of a peripheral sensory surface onto a cortical surface. However, the details of cortical architecture do not conform to this concept. Most, if not all, cortical areas consist of an interlaced structure containing multiple topographic maps of distinct classes of neural input. The term ``polymap'' is used to refer to a cortical area which consists of more than one system, interlaced in a globally topographic, but locally columnar fashion. The best known example of a cortical polymap is provided by the ocular dominance column system in layer IV of primate striate cortex, but the puff/extra-puff and orientation systems of surrounding layers also illustrate this concept, as do the thick-thin-interstripe columns of V-2, and the direction columns of MT. Since polymap architecture seems to be a common architectural pattern in the neo-cortex, this work addresses the computational modeling of polymap systems, with the expectation that such modeling will lead to a better understanding of the underlying biology. An algorithm is presented, based on the computational geometry constructs of Generalized Voronoi Polygon and Medial Axis, which provides a general method for simulating polymap systems. It also adds a powerful technique to the repertoire of Digital Image Warping. The algorithm is illustrated using the ocular dominance column and orientation column systems of V-1. In addition, a mechanism is proposed and demonstrated to account for the spatial registration of the ocular dominance and orientation column systems. Computer simulations of the activity evoked by binocular stimuli, as they would appear at the level of layers III and IV in V-1, are shown, and compared to results from recent experiments. 
Methods of generalizing these techniques to other common polymap cortical areas are outlined.
-
Ph.D. Thesis
1992
Polymorphic Type Inference and Abstract Data Types
Laufer, Konstantin
Abstract
|
PDF
Title: Polymorphic Type Inference and Abstract Data Types
Candidate: Laufer, Konstantin
Advisor(s): Goldberg, Benjamin; Odersky, Martin (Yale)
Abstract:
Many statically-typed programming languages provide an abstract data type construct, such as the package in Ada, the cluster in CLU, and the module in Modula-2. However, in most of these languages, instances of abstract data types are not first-class values. Thus they cannot be assigned to a variable, passed as a function parameter, or returned as a function result.
The higher-order functional language ML has a strong and static type system with parametric polymorphism. In addition, ML provides type reconstruction and consequently does not require type declarations for identifiers. Although the ML module system supports abstract data types, their instances cannot be used as first-class values for type-theoretic reasons.
In this dissertation, we describe a family of extensions of ML. While retaining ML's static type discipline, type reconstruction, and most of its syntax, we add significant expressive power to the language by incorporating first-class abstract types as an extension of ML's free algebraic datatypes. In particular, we are now able to express
- multiple implementations of a given abstract type,
- heterogeneous aggregates of different implementations of the same abstract type, and
- dynamic dispatching of operations with respect to the implementation type.
Following Mitchell and Plotkin, we formalize abstract types in terms of existentially quantified types. We prove that our type system is semantically sound with respect to a standard denotational semantics.
We then present an extension of Haskell, a non-strict functional language that uses type classes to capture systematic overloading. This language results from incorporating existentially quantified types into Haskell and gives us first-class abstract types with type classes as their interfaces. We can now express heterogeneous structures over type classes. The language is statically typed and offers comparable flexibility to object-oriented languages. Its semantics is defined through a type-preserving translation to a modified version of our ML extension.
We have implemented a prototype of an interpreter for our language, including the type reconstruction algorithm, in Standard ML.
-
Ph.D. Thesis
1992
A sublanguage based medical language processing system for German
Oliver, Neil
Abstract
|
PDF
Title: A sublanguage based medical language processing system for German
Candidate: Oliver, Neil
Advisor(s): Sager, Naomi
Abstract:
The major accomplishments reported in this thesis are:
- The development of a computer grammar for a nontrivial sublanguage of German. This grammar, using the LSP (Linguistic String Processor) grammar formalism, solves a number of parsing problems arising in free word order languages such as German.
- The development of an LSP-based information formatting system that obtains semantic representations of texts in a medical sublanguage of German.
- The confirmation of the sublanguage hypothesis (explained below).
In LSP grammar theory, sentences in a language are derived from a collection of basic sentence types. The basic sentence types are described in terms of the major syntactic classes (e.g., noun, verb, adjective) of the language. Sentences are derived from these basic sentences by the insertion of optional structures called adjuncts, by conjoining, and by substituting words in the major classes. Insertion, conjoining, and substitution are constrained by co-occurrence restrictions between elements in the derived syntactic structures. The restrictions subcategorize the major word classes into subclasses that may co-occur in sentences according to the co-occurrence restrictions.
The sublanguage hypothesis elaborates LSP grammar theory in the following way. In a particular domain of discourse, the subcategorization of the major word classes reflects the underlying semantics of the domain. The basic sentence types of the language, represented by sublanguage subclasses instead of major word classes, can function as data structures (called information formats) representing the information of the domain.
The LSP Medical Language Processor (LSP/MLP) is an information retrieval/information extraction system based on sublanguage and information formatting. It processes sentences in the English sublanguage of clinical reporting into information formats, which in turn are converted into database update records for a relational database. The information formats are derived from sublanguage co-occurrence information obtained from a corpus of discharge summaries.
The German information formatting system implemented in this work processes German Arztbriefe (doctor letters) of cancer surgery patients into information formats. It confirms the sublanguage hypothesis because it re-uses the sublanguage information (co-occurrence information and formats) of the English LSP/MLP system in an equivalent sublanguage, showing that the sublanguage information reflects the semantics of the domain.
-
Ph.D. Thesis
1992
Image Processing, Pattern Recognition and Attentional Algorithms in a Space-Variant Active Vision System
Ong, Ping-Wen
Abstract
|
PDF
Title: Image Processing, Pattern Recognition and Attentional Algorithms in a Space-Variant Active Vision System
Candidate: Ong, Ping-Wen
Advisor(s): Schwartz, Eric
Abstract:
A space-variant sensor, motivated by the human visual system, has its highest resolution at the center, with resolution decreasing rapidly toward the periphery. It has the advantages of a wide visual field and, at the same time, high central resolution. The dramatic reduction in pixel count in this kind of sensor makes it possible to build a real-time vision system using only moderate computational resources. On the other hand, a space-variant image has a different layout from a raster image: the neighbor relationships change from pixel to pixel. We need to devise special methods to solve this neighborhood problem.
We use a connectivity graph to represent neighbor relations between pixels in a space-variant image. We can use it to define operators for edge detection, smoothing, etc. We use a two-level pyramid based on the connectivity graph to perform local thresholding for segmentation. The translation, rotation, and scaling graphs are three extensions of the connectivity graph, which are used to translate, rotate, and scale space-variant images. We can use these graphs to perform scale- and rotation-independent template matching.
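The neighborhood idea can be made concrete with a minimal sketch (our own illustration, not the thesis's implementation; the adjacency-list representation is an assumption): once each pixel carries an explicit list of its neighbors, a local operator such as smoothing no longer depends on any raster layout.

```python
# Minimal illustration: with an explicit adjacency list per pixel, a
# smoothing operator is just an average over the pixel and its listed
# neighbors, independent of any raster layout.

def smooth(values, neighbors):
    """One pass of local averaging over a connectivity graph.

    values:    dict pixel_id -> intensity
    neighbors: dict pixel_id -> list of adjacent pixel_ids
    """
    out = {}
    for p, v in values.items():
        ring = [values[q] for q in neighbors[p]]
        out[p] = (v + sum(ring)) / (1 + len(ring))
    return out

# Toy space-variant "image": a three-pixel chain a - b - c.
vals = {"a": 0.0, "b": 6.0, "c": 0.0}
nbrs = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(smooth(vals, nbrs))  # {'a': 3.0, 'b': 2.0, 'c': 3.0}
```

An edge detector would follow the same pattern, replacing the average with a difference against the neighborhood mean.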
We successfully apply several feature designs for OCR in the space-variant domain: Characteristic-Loci, Partition, Heat-Signature, and Projection. All of them are translation and scale invariant. We also have two rotation-invariant methods, based on the Partition and Projection methods.
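As an illustration of how such invariances can arise, here is a hedged sketch of a projection-style feature (our own simplified construction, not necessarily the thesis's definition): cropping to the bounding box of the ink gives translation invariance, and resampling and normalizing the row/column projections gives approximate scale invariance.

```python
# Sketch of a projection-style OCR feature (simplified, illustrative only).
import numpy as np

def projection_feature(img, k=2):
    img = np.asarray(img, dtype=float)
    ys, xs = np.nonzero(img)
    box = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # crop: translation
    feats = []
    for profile in (box.sum(axis=1), box.sum(axis=0)):   # row and column projections
        bins = np.array_split(profile, k)                # resample to k bins
        resampled = np.array([b.mean() for b in bins])
        feats.append(resampled / resampled.sum())        # normalize away scale
    return np.concatenate(feats)

glyph = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0],
                  [0, 1, 1, 0]])
big = np.pad(np.kron(glyph, np.ones((2, 2))), 3)  # 2x-scaled, translated copy
print(np.allclose(projection_feature(glyph), projection_feature(big)))  # True
```

For non-integer scale factors the invariance is only approximate, which is why the feature is resampled rather than compared bin-for-bin.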
Since a space-variant sensor has higher resolution at the center, the recognition result is more reliable if we point the sensor close to the candidate object. Therefore, to recognize a single character, the center of that character is the best place to point the sensor. But to recognize adjacent characters, unless they are well separated, we need to point the sensor at a place from which the characters can be separated.
Based on this reliability analysis, we devised four attentional rules and an algorithm for moving sensors to recognize character strings in static natural scenes.
Finally, we describe the algorithms for reading characters from the license plate of a moving vehicle. The process includes stages for traffic zone finding, moving car finding, license plate finding, license plate tracking, and character reading.
-
Ph.D. Thesis
1992
On Compiling Regular Loops for Efficient Parallel Execution
Ouyang, Pei
Abstract
|
PDF
Title: On Compiling Regular Loops for Efficient Parallel Execution
Candidate: Ouyang, Pei
Advisor(s): Kedem, Zvi; Palem, Krishna
Abstract:
In this thesis, we study the problem of mapping regular loops onto multiprocessors. We develop mapping schemes that yield very efficient executions of regular loops on shared and distributed memory architectures. We also develop novel analysis techniques, using which we argue about the efficiency of these resulting executions. The quality of the execution of these regular loops in the distributed memory setting relies heavily on implementing cyclic shifts efficiently. Effectively, cyclic shifts are used to communicate results between individual processors, to which different interdependent iterations are assigned. Therefore, in order to achieve efficient executions of regular loops on distributed memory architectures, we also develop and analyze algorithms for solving the cyclic shift problem. In order to analyze the executions of regular loops that result from any specific mapping, we need to characterize the important parameters that determine its efficiency. We formally characterize a basic set of such parameters. These parameters facilitate the analysis of the memory and the processor requirements of a given execution, as well as its running time. Using these parameters, we analyze a greedy execution scheme in the shared memory model. For example, we can determine the limit on the number of processors beyond which no speedup can be attained by the greedy method for regular loops. The greedy scheme is of interest because it exploits the maximal possible parallelism in a natural way.
We then address the mapping scheme of regular loops onto distributed memory machines. Unfortunately, we show that the problem of finding an optimal mapping is computationally intractable in this case. In order to provide schemes that can be actually applied to regular loops at compile-time, we relax the requirement that the resulting executions be optimum. Instead, we design a heuristic mapping algorithm and validate it through experiments. This heuristic mapping scheme relies heavily on the use of efficient algorithms for realizing cyclic shifts. Therefore, we also study the problem of realizing cyclic shifts on hypercube architectures.
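To make the role of cyclic shifts concrete, the following sketch simulates the standard decomposition of a cyclic shift by k into at most log n power-of-two shifts, one per set bit of k; on a hypercube, each power-of-two shift maps naturally onto communication along fixed dimensions. This is an illustration of the underlying idea only, not the thesis's algorithm or its communication schedule.

```python
# Sketch: decompose a cyclic shift by k among n = 2^d processors into
# power-of-two steps. We simulate only the decomposition, not the routing.

def cyclic_shift(data, k):
    """Shift data right by k positions, one power-of-two step at a time."""
    n = len(data)
    assert n & (n - 1) == 0, "sketch assumes a power-of-two machine size"
    k %= n
    step = 0
    while k:
        if k & 1:
            shift = 1 << step          # this round moves everything by 2^step
            data = data[-shift:] + data[:-shift]
        k >>= 1
        step += 1
    return data

print(cyclic_shift([0, 1, 2, 3, 4, 5, 6, 7], 3))  # [5, 6, 7, 0, 1, 2, 3, 4]
```

A shift by any k thus costs at most d communication rounds on a d-dimensional hypercube, which is why efficient cyclic shifts are a useful primitive for the loop mappings above.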
-
TR1992-597
1992
Semantic Analyses for Storage Management Optimizations in Functional Language Implementations
Park, G.
Abstract
|
PDF
Title: Semantic Analyses for Storage Management Optimizations in Functional Language Implementations
Author(s): Park, G.
Abstract:
One of the major overheads in implementing functional languages is the storage management overhead due to dynamic allocation and automatic reclamation of indefinite-extent storage. This dissertation investigates the problems of statically inferring lifetime information about dynamically-allocated objects in higher-order polymorphic functional languages, both strict and non-strict, and of applying that information to reduce the storage management overhead.
We have developed a set of compile-time semantic analyses for a higher-order, monomorphic, strict functional language based on denotational semantics and abstract interpretation. They are 1) escape analysis, which provides information about the relative lifetimes of objects such as arguments and local objects defined within a function with respect to an activation of the function call, 2) refined escape analysis which, as a refinement of escape analysis, provides information about the lifetimes of components of aggregate structures, and 3) reference escape analysis which provides information about the relative lifetimes of references created within a function with respect to an activation of the function.
We also have developed a compile-time semantic analysis called order-of-demand analysis for higher-order, monomorphic, non-strict functional languages, which provides information about the order in which the values of bound variables are demanded and thus allows one to compute a range of information including strictness, evaluation-order, and evaluation-status information.
Using the notion of polymorphic invariance, we describe a method for analyzing a polymorphic language by using the analyses for a monomorphic language. We then extend those analyses for a strict language to a non-strict language using non-strict program transformation and evaluation-status information.
Based on statically inferred escape information, we propose a combination of storage management optimization techniques including stack allocation, explicit reclamation, in-place reuse, reference counting elimination, block allocation/reclamation, and improving generational garbage collection.
-
TR1992-616
1992
Domain Decomposition Algorithms for the P-Version Finite Element Method for Elliptic Problems
Pavarino, L.
Abstract
|
PDF
Title: Domain Decomposition Algorithms for the P-Version Finite Element Method for Elliptic Problems
Author(s): Pavarino, L.
Abstract:
Domain decomposition algorithms based on the Schwarz framework were originally proposed for the h-version finite element method for elliptic problems. In this thesis, we study some Schwarz algorithms for the p-version finite element method, in which increased accuracy is achieved by increasing the degree p of the elements while the mesh is fixed. These iterative algorithms, often of conjugate gradient type, are both parallel and scalable, and therefore very well suited for massively parallel computing.
We consider linear, scalar, self-adjoint, second order elliptic problems and quadrilateral elements in the finite element discretization. For a class of overlapping methods, we prove a constant bound, independent of the degree p, the mesh size H, and the number of elements N, for the condition number of the iteration operator. This optimal result holds in two and three dimensions for additive and multiplicative schemes, as well as variants on the interface.
We then consider local refinement for the same class of overlapping methods in two dimensions. Optimal bounds are obtained under certain hypotheses on the choice of refinement points, while in general almost optimal bounds with logarithmic growth in p are obtained. In the analysis of these local refinement methods, we prove some results of independent interest, such as a polynomial discrete Sobolev inequality and a bounded decomposition of discrete harmonic polynomials.
Iterative substructuring methods in two dimensions are also considered. We use the additive Schwarz framework to prove almost optimal bounds as in the h-version finite element method.
Numerical experiments, confirming the theoretical results, are conducted in two dimensions for model problems.
-
TR1992-614
1992
Some Schwarz Algorithms for the P-Version Finite Element Method
Pavarino, L.
Abstract
|
PDF
Title: Some Schwarz Algorithms for the P-Version Finite Element Method
Author(s): Pavarino, L.
Abstract:
Domain decomposition methods based on the Schwarz framework were originally proposed for the h-version finite element method for elliptic problems. In this paper, we consider instead the p-version, in which increased accuracy is achieved by increasing the degree of the elements while the mesh is fixed. We consider linear, scalar, self-adjoint, second order elliptic problems and quadrilateral elements in the finite element discretization. For a class of overlapping additive Schwarz methods, we prove a constant bound, independent of the degree p and the number of elements N, for the condition number of the iteration operator. This optimal result holds in two and three dimensions for additive and multiplicative schemes, as well as variants on the interface. We then study local refinement for the same class of overlapping methods in two dimensions. A constant bound holds under certain hypotheses on the refinement region, while in general an almost optimal bound with logarithmic growth in p is obtained.
-
Ph.D. Thesis
1992
Japanese/English Machine Translation Using Sublanguage Patterns and Reversible Grammars
Peng, Ping
Abstract
|
PDF
Title: Japanese/English Machine Translation Using Sublanguage Patterns and Reversible Grammars
Candidate: Peng, Ping
Abstract:
For this thesis, a Japanese/English machine translation system with reversible components has been designed and implemented in PROLOG. Sublanguage co-occurrence patterns have been used to address the problems of lexical and structural selection in the transfer between the internal representations of a pair of natural languages. The system has been tested translating Japanese into English in the domain of programming language manuals. The evaluation of the test outputs provides some assessment of the utility of the sublanguage approach as a method for the development and refinement of a machine translation system. The thesis also explores the roles that a reversible grammar would play in sharing linguistic knowledge between parsing and generation.
The system has been developed with the goal of using sublanguage word co-occurrence patterns to simplify the description of syntactic/semantic knowledge needed in both the transfer rules and the analysis of the source language. In particular, sublanguage co-occurrence patterns are introduced to provide semantic constraints and ellipsis recovery in parsing Japanese.
This thesis introduces a right-to-left parsing scheme for Japanese. The idea for the right-to-left parsing algorithm evolved from the desire to produce partial syntactic analyses of Japanese in a more deterministic manner than was achieved by conventional left-to-right parsing schemes. The algorithm makes efficient use of sublanguage co-occurrence patterns as semantic knowledge to help disambiguate Japanese parses. The enforcement of syntactic and semantic constraints is tightly interwoven during the course of parsing. The performance in parsing Japanese has thereby been significantly enhanced.
A procedure has been implemented for translating a Definite Clause Grammar dually into a PROLOG parser and PROLOG generator, so that one grammar can be used for parsing and generation. In current natural language processing systems, separate grammars are used for parsing and generation. However, there has long been an interest in designing a single grammar for both parsing and synthesis for reasons of efficiency and integrity, as well as linguistic elegance and perspicuity. As part of the current implementation, a strategy has been developed for creating efficient grammars for both parsing and generation using a goal reordering technique within the logic programming framework.
-
Ph.D. Thesis
1992
The Analysis and Generation of Tests for Programming Language Translators
Rennels, Deborah
Abstract
|
PDF
Title: The Analysis and Generation of Tests for Programming Language Translators
Candidate: Rennels, Deborah
Advisor(s): Schonberg, Edmond
Abstract:
This thesis addresses the automation of two aspects of compiler validation testing: semantic analysis of existing test programs, and construction of new test programs. Semantic analysis is required during test modification and maintenance, and also when evaluating the language coverage attained by the test suite. In the current state of practice, both the semantic analysis and the construction of new test programs are extremely labor-intensive tasks; both, however, are amenable to automation. We describe two systems; one, which we have implemented, involves test case analysis and feature identification. The other is a proposed system for automatic generation of tests from test specifications. We tested our methods on the largest and most comprehensive compiler validation project to date: the Ada Compiler Validation Capability (ACVC), a large collection of Ada test programs used to verify that compilers conform to the Ada language standard.
We first describe the Ada Features Identification System (AFIS), a system which automates test program analysis. AFIS provides three different methods for identifying Ada language features in test programs, ranging from elementary syntactic items to complex context-sensitive combinations of semantic features. Semantic feature combinations are specified by writing program templates in a pattern language which is an extension of Ada, and pattern-matching these templates against test programs.
In the second part of this thesis we define a language to facilitate the specification of Ada compiler test objectives, and the design of a system that uses these specifications to automatically generate valid Ada test programs. The language allows a test developer to write a specification that embodies the testing goal of a given objective, without including all type and expression information required in a complete test program. These details are supplied automatically by the generator system. We show, by numerous examples taken from the Ada Implementors Guide (the design document for the Ada validation suite), how Ada test objectives can be specified in this language. The focus of our examples is constraint violation checking, which is an important component of Ada's strong typing system, and also a basic organizing principle of the ACVC tests.
-
Ph.D. Thesis
1992
Massively Parallel Bayesian Object Recognition
Rigoutsos, Isidore
Abstract
|
PDF
Title: Massively Parallel Bayesian Object Recognition
Candidate: Rigoutsos, Isidore
Advisor(s): Hummel, Robert
Abstract:
The problem of model-based object recognition is a fundamental one in the field of computer vision, and represents a promising direction for practical applications. In this talk we will describe the design, analysis, implementation and testing of a model-based object recognition system.
In the first part of the talk, we will discuss two parallel algorithms for performing geometric hashing. The first algorithm regards geometric hashing as a connectionist algorithm with information flowing via patterns of communication, and is designed for an SIMD hypercube-based machine. The second algorithm is more general, and treats the parallel architecture as a source of ``intelligent memory;'' the algorithm achieves parallelism through broadcast facilities from the parallel machine's host. A number of enhancements to the geometric hashing method, such as hash table equalization, the use of hash table symmetries, and hash table foldings will also be presented. These enhancements were developed specifically for the parallel algorithms, and lead to substantial performance improvements.
In the second part of the talk, we will examine the performance of geometric hashing methods in the presence of noise. The quantization of the invariants can result in a non-graceful degradation of the performance. We will present precise formulas as well as first-order approximations describing the dependency of the computed invariants on Gaussian positional error, for the similarity and affine transformation cases. Knowledge of this dependency allows the incorporation of an error model in the geometric hashing framework and subsequently leads to improved performance. A counter-intuitive result regarding the solutions of certain linear systems will also be derived as a corollary of this analysis.
In the final part of the talk, we will present an interpretation of geometric hashing that allows the algorithm to be viewed as a Bayesian approach to model-based object recognition. This interpretation, which is a new form of Bayesian-based model matching, leads to well-justified formulas, and gives a precise weighted-voting method for the evidence-gathering phase of geometric hashing. These formulas replace traditional heuristically-derived methods for performing weighted voting, and also provide a precise method for evaluating uncertainty.
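The flavor of the approach can be sketched in a few lines. The following toy example is our own construction: the similarity-invariant coordinates, bin size, and Gaussian vote weight are illustrative stand-ins for the precise Bayesian formulas derived in the thesis. It builds a hash table over ordered basis pairs of model points and casts weighted votes for models from a scene.

```python
# Toy geometric hashing with weighted (soft) voting -- illustrative only.
import cmath
from collections import defaultdict
from itertools import permutations
from math import exp

def invariant(p, b0, b1):
    """Similarity-invariant coordinate of p in the basis (b0, b1),
    computed with complex arithmetic."""
    return (complex(*p) - complex(*b0)) / (complex(*b1) - complex(*b0))

def build_table(models, q=0.25):
    """Hash every model point in every ordered basis pair."""
    table = defaultdict(list)
    for name, pts in models.items():
        for b0, b1 in permutations(pts, 2):
            for p in pts:
                if p in (b0, b1):
                    continue
                z = invariant(p, b0, b1)
                table[(round(z.real / q), round(z.imag / q))].append((name, (b0, b1)))
    return table

def recognize(scene, table, q=0.25, sigma=0.5):
    """Pick one scene basis and cast Gaussian-weighted votes instead of hard
    unit votes (a full system would try many scene bases)."""
    b0, b1 = scene[0], scene[1]
    votes = defaultdict(float)
    for p in scene[2:]:
        z = invariant(p, b0, b1)
        key = (round(z.real / q), round(z.imag / q))
        center = complex(key[0] * q, key[1] * q)
        w = exp(-abs(z - center) ** 2 / (2 * sigma ** 2))
        for name, _basis in table[key]:
            votes[name] += w
    return max(votes, key=votes.get) if votes else None

models = {"triangle": [(0, 0), (2, 0), (1, 2)],
          "square":   [(0, 0), (2, 0), (2, 2), (0, 2)]}
table = build_table(models)
# The scene is the square, rotated 90 degrees and translated.
scene = [(5, 5), (5, 7), (3, 7), (3, 5)]
print(recognize(scene, table))  # square
```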
A prototype object recognition system using these ideas has been implemented on a CM-2 Connection Machine. The system is scalable and can recognize aircraft and automobile models subjected to 2D rotation, translation, and scale changes in real-world digital imagery. This is the first system of its kind that is scalable, uses large databases, can handle noisy input data, works rapidly on an existing parallel architecture, and exhibits excellent performance with real-world, natural scenes.
-
Ph.D. Thesis
1992
Control of a Dexterous Robot Hand: Theory, Implementation, and Experiments
Silver, Naomi
Abstract
|
PDF
Title: Control of a Dexterous Robot Hand: Theory, Implementation, and Experiments
Candidate: Silver, Naomi
Advisor(s): Mishra, Bud
Abstract:
Advanced robotic systems, such as multi-fingered hands, are becoming more complex, and, as yet, many of the basic questions involved remain unanswered. What control law should we use? What constitutes a good control law? How should we describe motions? What constitutes a broad, yet efficient description of motions for a grasped object?
In addition to the complexity, robotic systems frequently undergo upgrades, and it is therefore necessary to design the system in a modular, organized manner. This includes such things as hierarchical software, system-independent descriptions of motion, and system-independent control laws.
In this thesis, we address these issues for a less complex system. We have focused our attention on describing motions for objects being grasped by a multi-fingered hand. We present a formulation for motion primitives, which allow object manipulation and require limited parameter specification. We have attempted to find a control law which will perform well under adverse conditions. We have built the system and tested it on the NYU Four Finger Manipulator which is a two dimensional hand. Even for this simplified problem, there remains a large degree of complexity and there are as yet no definitive solutions to these problems.
-
Ph.D. Thesis
1992
Executable Operational Semantics of Programming Languages
Siritzky, Brian
Abstract
|
PDF
Title: Executable Operational Semantics of Programming Languages
Candidate: Siritzky, Brian
Advisor(s): Dewar, Robert
Abstract:
Since the inception of computer languages there have been attempts to define programming languages formally. Several markedly different methodologies have been proposed to solve this problem. This thesis argues for Executable Operational Semantics (EOS) as a methodology for formal definition which has many fundamental advantages. The EOS methodology has, however, been broadly criticized. We show that the major objections against EOS are unfounded, and that executability is suitable and useful for many applications of formal definitions. The primary criticisms of EOS definitions (that they are hardware and architecture specific, that they are unable to describe concurrency and non-determinism, and that they overspecify implementation details) are countered by the demonstration that Ada/Ed, an executable definition of Ada developed at New York University, can avoid or overcome each problem. A description of the implementation of Ada's real arithmetic and representation specifications reveals hardware and architectural independence in an executable definition. Ada/Ed's model of Ada tasking demonstrates that concurrency can be defined within an executable framework, and we argue that an executable definition can describe all non-deterministic aspects of Ada. The problems of overspecificity can be alleviated by the appropriate choice of metalanguage and software techniques, and by suitable parameterization of the formal definition. Finally, we describe some general advantages of executable definitions. We present worked examples of questions to a formal definition of Ada, and establish that requiring a definition to be executable enhances rather than degrades its usability and credibility.
-
Ph.D. Thesis
1992
Non-Correcting Error Recovery For LR Parsers
Snyder, Kirk
Abstract
|
PDF
Title: Non-Correcting Error Recovery For LR Parsers
Candidate: Snyder, Kirk
Advisor(s): Schwartz, Jacob T.
Abstract:
In recent years much effort has been devoted to the automatic generation of parsers, with considerable success. The error-handling mechanisms of these parsers are still not completely satisfactory, however. Currently available techniques are either too slow for practical production compilers, or they leave open the possibility of many spurious diagnostic messages. This thesis presents a parsing technique designed to minimize the frequency of spurious or misleading diagnostic messages emitted by the compiler, without the efficiency cost of similarly robust parsers. The technique parses program text following a syntax error as a `suffix' in the programming language, reporting errors in invalid suffixes. The system achieves its high efficiency by accepting a superset of the suffixes of the language being parsed, but a sufficiently small superset that very few errors are undetected. The technique described has been fully implemented, and a number of experiments on typical syntax errors in various programming languages are presented. We describe our parsing system in detail and assess its strengths and weaknesses relative to other parsing systems.
-
Ph.D. Thesis
1992
Global Methods for Image Motion Analysis
Sundareswaran, V.
Abstract
|
PDF
Title: Global Methods for Image Motion Analysis
Candidate: Sundareswaran, V.
Advisor(s): Hummel, Robert
Abstract:
Processing motion information is an important problem in building automated vision systems. A moving sensor can obtain knowledge about the environmental layout, its own motion, and motion of objects in the scene by processing the temporal information in the imagery. We provide algorithms that can determine self-motion (or egomotion) by observing a sequence of images produced by a moving sensor in a rigid, stationary environment. The algorithms make use of optical flow information extracted from the sequence, and unlike most alternative methods, are global and robust to inaccuracies in the flow data.
Two algorithms are presented. Both algorithms assume that the first stage of visual motion analysis, the computation of an image vector flow field that describes the instantaneous motion of individual points, has been solved.
The first algorithm, the flow circulation algorithm, determines the rotational parameters using the curl of the flow field, which under many conditions is approximately a linear function. The coefficients of the linear function, which may be determined by simple regression, are the desired rotational parameters. Circulation values, defined to be contour integrals of the vector field on the image plane, may be used in place of curl values, resulting in robustness. The second algorithm determines the translational parameters of the motion. The inner product of the image vector flow field and a certain circular vector field gives rise to a scalar function that is of a particular quadratic polynomial form when the center of the circular field is chosen appropriately. This correct choice of the center is related to the translational parameters and can be found by projecting the inner product function onto suitable subspaces determined by the quadratic polynomial form. Three different methods, of increasing complexity and accuracy, are developed. A fourth, fast but approximate method is also presented.
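The basis of the flow circulation algorithm can be illustrated with a small synthetic experiment (a sketch using the standard instantaneous-motion equations with unit focal length, not the thesis code): for a purely rotational camera motion omega = (A, B, C), the image flow is u = Axy - B(x^2 + 1) + Cy, v = A(y^2 + 1) - Bxy - Cx, and its curl dv/dx - du/dy is the linear function -(Ax + By + 2C), so a simple regression on sampled curl values recovers the rotation.

```python
# Sketch: recover rotational parameters from the curl of a synthetic
# rotational flow field by linear regression (illustrative only).
import numpy as np

A, B, C = 0.02, -0.01, 0.005                     # ground-truth rotation
xs, ys = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
u = A * xs * ys - B * (xs**2 + 1) + C * ys
v = A * (ys**2 + 1) - B * xs * ys - C * xs

h = xs[0, 1] - xs[0, 0]                          # grid spacing
curl = np.gradient(v, h, axis=1) - np.gradient(u, h, axis=0)

# Fit curl ~ a*x + b*y + c; then A = -a, B = -b, C = -c/2.
M = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
a, b, c = np.linalg.lstsq(M, curl.ravel(), rcond=None)[0]
print(-a, -b, -c / 2)   # close to (A, B, C)
```

Replacing pointwise curl estimates with contour integrals (circulation values), as the thesis does, makes the same regression robust to noisy flow data.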
The algorithms are described, analyzed and experimental results are shown. The thesis contains mathematical observations that provide insight into the problem of motion analysis, and experimental observations that demonstrate the applicability of the algorithms.
-
TR1992-621
1992
Statistical Approach to Affine Invariant Matching with Line Features
Tsai, F.
Abstract
|
PDF
Title: Statistical Approach to Affine Invariant Matching with Line Features
Author(s): Tsai, F.
Abstract:
One of the most important goals of computer vision research is object recognition. Currently, most object recognition algorithms assume reliable quality of image segmentation, which in practice is often not the case. This report examines the combination of the Hough Transform with a variation of Geometric Hashing as a technique for model-based object recognition in seriously degraded single intensity images.
The performance analysis of geometric hashing has recently received much attention. To our knowledge, however, all of this work applies the paradigm to point features and shows that the technique is sensitive to noise; line features have not yet been explored. In this report, we use lines as the primitive features to compute the geometric invariants for fast indexing into the geometric hash table containing the pre-processed model information. In addition, we analytically determine the effect of perturbations of line parameters on the computed invariant for the case where models are allowed to undergo affine transformation.
We have implemented the system and run a series of experiments on polygonal objects modeled by lines. The results show that the technique is noise resistant and suitable for environments containing many occlusions.
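As an illustration of line-based affine invariants (not necessarily the report's exact construction), the ratio of two triangle areas built from pairwise intersection points of four lines is unchanged by any affine map, since affine maps scale all areas by the same determinant; such a ratio can therefore serve as a hash index. The specific lines and affine map below are invented.

```python
import math

def intersect(l1, l2):
    # Each line is given by two points ((x1, y1), (x2, y2)); return the
    # intersection point of the two (non-parallel) lines.
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def area(p, q, r):
    # Unsigned area of triangle pqr.
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

def invariant(lines):
    # Ratio of two triangle areas over intersection points: affine-invariant.
    p12 = intersect(lines[0], lines[1])
    p23 = intersect(lines[1], lines[2])
    p34 = intersect(lines[2], lines[3])
    p41 = intersect(lines[3], lines[0])
    return area(p12, p23, p34) / area(p12, p34, p41)

def affine(p, A, t):
    return (A[0][0] * p[0] + A[0][1] * p[1] + t[0],
            A[1][0] * p[0] + A[1][1] * p[1] + t[1])

lines = [((0, 0), (1, 0)),     # y = 0
         ((0, 0), (0, 1)),     # x = 0
         ((0, 1), (1, 0)),     # y = 1 - x
         ((2, 0), (3, 2))]     # y = 2x - 4
A, t = [[2.0, 0.3], [-0.5, 1.5]], (4.0, -2.0)
mapped = [(affine(p, A, t), affine(q, A, t)) for p, q in lines]

print(round(invariant(lines), 6))                                       # 1.25
print(math.isclose(invariant(lines), invariant(mapped), rel_tol=1e-9))  # True
```

Because the invariant survives the affine map, degraded images of affinely transformed models can still index into the same hash-table bucket; the report's perturbation analysis quantifies how noise in the line parameters spreads this index.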
-
Ph.D. Thesis
1991
Persistent LINDA: Design and implementation of a system to add transactions to LINDA
Anderson, Brian
Abstract
|
PDF
Title: Persistent LINDA: Design and implementation of a system to add transactions to LINDA
Candidate: Anderson, Brian
Advisor(s): Shasha, Dennis
Abstract:
Persistent Linda (PLinda hereafter) is based on the shared tuple space model of Linda. PLinda extends the model to facilitate the manipulation of sets and to implement transactional persistence. Its operations are upward compatible with Linda's. We have chosen Linda as a basis for the following reasons:
1) A shared memory model is the language of much work on parallel algorithms, so implementing a parallel algorithm is easiest in that model. At the same time, a persistent data store is most useful as a shared resource.
2) In a distributed system, the cost of sending a message is often dominated by the cost of setting up the message. By encapsulating accesses into large semantic units (i.e. Linda tuples) as opposed to machine-dependent units such as words, Linda reduces the number of data transfers, thereby reducing set-up overhead. Shared data stores are also best accessed in large chunks for the same reason.
3) Associative retrieval of tuples is a convenient abstraction to the parallel programmer and is a good target of optimization for the database implementer.
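A toy, in-process tuple space can illustrate the associative model that Linda and PLinda build on. This sketch is illustrative only: PLinda's transactions and persistence are omitted, and the class and method names are invented (with `None` standing in for a formal wildcard field in a template).

```python
import threading

class TupleSpace:
    """Minimal Linda-style tuple space: out() deposits a tuple, in_() blocks
    until some tuple matches a template and withdraws it."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()      # wake any process blocked in in_()

    def _match(self, template, tup):
        # Associative matching: equal length, and each template field is
        # either a wildcard (None) or equal to the corresponding value.
        return len(template) == len(tup) and all(
            f is None or f == v for f, v in zip(template, tup))

    def in_(self, template):
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()        # block until a new tuple arrives

ts = TupleSpace()
ts.out(("point", 3, 4))
ts.out(("point", 5, 12))
print(ts.in_(("point", 5, None)))  # ('point', 5, 12)
```

The point of matching whole tuples rather than machine words is visible even in this sketch: one `in_` call transfers an entire semantic unit, which is what keeps message set-up overhead low in the distributed setting described above.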
-
Ph.D. Thesis
1991
A Theory of Natural Learning
Botta, Alexander
Abstract
|
PDF
Title: A Theory of Natural Learning
Candidate: Botta, Alexander
Advisor(s): Davis, Ernest
Abstract:
Unsupervised learning is based on capturing regularities in data. We formalize the vague notion of regularity, using the concept of algorithmic information (Solomonoff, Chaitin, Koppel). We present a theory of how regularities are induced and accumulated. A generative model captures a regularity if it achieves compression. A basic regularity is a building block for hierarchical structures. We prove that a basic regularity may be identified as a local maximum in compressibility. Stepwise induction is a polynomial-time approach to structures whose basic components have bounded complexity.
Agents exploring a universe engage in active learning. The regularities of their sensory-motor streams are similar to Piaget's schemes and constituents of an induced ontology. We illustrate these ideas on three microworlds. First are Moore automata. State representations are constructed incrementally from results of tests when in that state and from outputs perceived on the way to that state. The second world contains loosely coupled geometric objects. They are basic regularities identifiable by stepwise induction. In the third world the agent has an elaborate eye and can move objects on a tiled surface. Statistical correlations between sets of stimuli are induced, then models are constructed to generate instances of new correlations from already known ones.
Algorithmic information theory allows a unified perspective on many areas of learning research. We define analysis as the separation of novelty in data from the already known. We present explanation-based generalization as a well-formalized instance of analysis, and constructive induction as an ill-defined instance. We show EBG to specialize a theory through positive examples, and we prove it a language-independent method, valid beyond predicate calculus representations.
-
TR1991-579
1991
Differential Properties of Eigenvalues
Burke, J.;
Overton, M.
Abstract
|
PDF
Title: Differential Properties of Eigenvalues
Author(s): Burke, J.; Overton, M.
Abstract:
We define and study a directional derivative for two functions of the spectrum of an analytic matrix valued function. These are the maximum real part and the maximum modulus of the spectrum. Results are first obtained for the roots of polynomials with analytic coefficients by way of Puiseux-Newton series. In this regard, the primary analytic tool is the so called Puiseux-Newton diagram. These results are then translated into the context of matrices. Precise results are obtained when the eigenvalues that achieve the maximum value for the function under consideration are all either nondefective or nonderogatory. In the defective derogatory cases a general lower bound for the directional derivative is given which, in particular, describes those directions in which the directional derivative attains an infinite value.
- TR1991-568 1991 On the Subdifferentiability of a Matrix Spectrum II: Subdifferential Formulas Burke, J.; Overton, M. Abstract | PDF
- TR1991-567 1991 On the Subdifferentiability of a Matrix Spectrum I: Mathematical Foundations Burke, J.; Overton, M. Abstract | PDF
- TR1991-587 1991 New Theoretical and Computational Results for Regular Languages Chang, C.; Paige, R. Abstract | PDF
-
Ph.D. Thesis
1991
A Practical Method for Constructing Efficient LALR(k) Parsers with Automatic Error Recovery
Charles, Phillipe
Abstract
|
PDF
Title: A Practical Method for Constructing Efficient LALR(k) Parsers with Automatic Error Recovery
Candidate: Charles, Phillipe
Advisor(s): Schonberg, Edmond
Abstract:
LR parsing is used for a wide range of applications, including compiler construction, automatic code generation, language-specific editors and natural language processing. Currently, however, solutions have not been developed for practical multiple-lookahead parsing, fully-automatic error recovery, and space- and time-efficient LR parsing across this wide range of applications.
We present a practical framework for LR(k) parsing, for k > 1. We give an efficient algorithm that incrementally constructs an LALR(k) parser with varying-length lookahead strings, and whose symbols are consulted during parsing only when necessary.
Currently, effective LR error recovery systems require some user intervention. We describe an effective and fully automated syntactic error recovery method for LR(k) parsers. Finally, we present a generally effective method for compressing LR(k) parsing tables.
We have incorporated these innovations into a parser generator system that automatically constructs a production-quality parser with built-in error diagnostics and recovery. We will show examples of its performance on several programming languages.
-
Ph.D. Thesis
1991
Statistical Techniques for Parsing Messages
Chitrao, Mahesh
Abstract
|
PDF
Title: Statistical Techniques for Parsing Messages
Candidate: Chitrao, Mahesh
Advisor(s): Grishman, Ralph
Abstract:
Message processing is the extraction of information about key events described in brief narratives concerning a narrow domain. This is a suitable task for natural language understanding, since the amount of world knowledge required is limited. However, the messages are often ill-formed and therefore require the grammar which parses them to be quite forgiving. This often results in a proliferation of parses. This problem is compounded by one's inability to construct a complete domain model which would resolve all the semantic ambiguity. Thus, selection of the correct parse becomes an important goal for such systems.
Structural preference is a technique which helps disambiguation by assigning a higher preference to certain syntactic structures. The idea of statistical parsing evolved from the desire to prefer certain structures over others on the basis of empirical observations rather than ad-hoc judgement. In the framework of statistical parsing, every production of the grammar is assigned a priority, which is computed from a statistical analysis of a corpus.
There are two distinct methodologies that can be used for assigning these priorities. In Supervised Training, only the correct parses are used for training the grammar. On the other hand, Unsupervised Training uses parses independent of their semantic validity. After assigning the priorities, the parser searches for parses in a best-first order as dictated by these priorities.
When this scheme was incorporated into the PROTEUS message understanding system while processing OPREP (U.S. Navy Operational) messages, a two-fold advantage was observed. Firstly, the speed of the parsing increased, because rare productions tended not to get used at all. Secondly, since the parses were generated in the best-first order, the parses generated earlier on tended to be more likely and semantically more acceptable.
The performance of the modified parsing algorithm was evaluated with and without several refinements such as the use of context sensitive statistics and the use of heuristic penalties. The relative performances of the grammars trained by Supervised Training and Unsupervised Training were also compared.
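The priority scheme can be sketched as follows. The corpus and productions are invented, and the cost function (negative log relative frequency of a production among all expansions of its left-hand side) is one natural choice rather than necessarily the one used in the thesis.

```python
import math
from collections import Counter

# Invented training corpus: each entry is one observed use of a production.
observed = ["NP->DET N", "NP->DET N", "NP->DET N", "NP->N",
            "VP->V NP", "VP->V"]

counts = Counter(observed)
by_lhs = Counter(p.split("->")[0] for p in observed)

# Priority (cost) = -log(relative frequency among expansions of the same
# nonterminal); frequent productions get low cost, rare ones high cost.
priority = {p: -math.log(c / by_lhs[p.split("->")[0]])
            for p, c in counts.items()}

# A best-first parser would expand cheaper productions first:
ranked = sorted(priority, key=priority.get)
print(ranked[0])  # NP->DET N
```

With such costs, rare productions sink to the bottom of the agenda, which matches the two observed effects: they tend not to be used at all (speed), and the first parses produced tend to be the statistically likely ones.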
-
TR1991-548
1991
Randomized Parallel Algorithms for Trapezoidal Diagrams
Clarkson, K. L.;
Cole, R.; Tarjan, R. E.
Abstract
|
PDF
Title: Randomized Parallel Algorithms for Trapezoidal Diagrams
Author(s): Clarkson, K. L.; Cole, R.; Tarjan, R. E.
Abstract:
We describe randomized parallel CREW PRAM algorithms for building trapezoidal diagrams of line segments in the plane. For general segments, we give an algorithm requiring optimal O(A + n log n) expected work and optimal O(log n) time, where A is the number of intersecting pairs of segments. If the segments form a simple chain, we give an algorithm requiring optimal O(n) expected work and O(log n log log n log* n) expected time, and a simpler algorithm requiring O(n log* n) expected work. The serial algorithm corresponding to the latter is the simplest known algorithm requiring O(n log* n) expected operations. For a set of segments forming K chains, we give an algorithm requiring O(A + n log* n + K log n) expected work and O(log n log log n log* n) expected time. The parallel time bounds require the assumption that enough processors are available, with processor allocations every log n steps.
-
TR1991-546
1991
An Asynchronous Parallel Algorithm for Undirected Graph Connectivity
Cole, R.;
Zajicek, O.
Abstract
|
PDF
Title: An Asynchronous Parallel Algorithm for Undirected Graph Connectivity
Author(s): Cole, R.; Zajicek, O.
Abstract:
An algorithm for computing the components of an undirected graph in the (asynchronous) APRAM model is given; the algorithm uses O(n + e) processes and O(log n) rounds.
- TR1991-573 1991 Online Algorithms for Finger Searching Cole, R.; Raghunathan, A. Abstract | PDF
-
TR1991-557
1991
On the Detection of Robust Curves
Cole, R.;
Vishkin, U.
Abstract
|
PDF
Title: On the Detection of Robust Curves
Author(s): Cole, R.; Vishkin, U.
Abstract:
Given m points in the plane and a threshold t, a curve is defined to be robust if at least t points lie on it. Efficient algorithms for detecting robust curves are given; the key contribution is to use randomized sampling. In addition, an approximate version of the problem is introduced. A geometric solution to this problem is given; it too can be enhanced by randomization.
These algorithms are readily generalized to solve the problem of robust curve detection in a scene of curve fragments: given a set of curve segments, a curve σ is defined to be robust if curve segments of total length at least l lie on σ. Again, both an exact and an approximate version of the problem are considered.
The problems and solutions are closely related to the well-investigated Hough Transform technique.
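The randomized-sampling idea, specialized to straight lines, can be sketched as follows. The point set, threshold, tolerance, and trial count are invented for illustration; the paper's algorithms handle more general curves and come with analyzed bounds.

```python
import random

def robust_line(points, t, trials=200, eps=1e-6, seed=0):
    """Randomized sampling for robust-line detection: repeatedly sample two
    points, hypothesize the line through them, and report it if at least
    t of the points lie (within eps) on that line."""
    rng = random.Random(seed)
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        norm = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        # Count points whose distance to the hypothesized line is <= eps.
        on = sum(1 for (x, y) in points
                 if abs((x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)) <= eps * norm)
        if on >= t:
            return (x1, y1), (x2, y2)
    return None

rng = random.Random(1)
pts = [(i, 2 * i + 1) for i in range(10)]                            # y = 2x + 1
pts += [(rng.uniform(0, 10), rng.uniform(0, 30)) for _ in range(5)]  # clutter
print(robust_line(pts, t=10) is not None)   # True: the 10-point line is found
print(robust_line(pts, t=12))               # None: no line holds 12 points
```

The key property randomization buys is that a line holding t of the m points is hit by a sampled pair with probability roughly (t/m)^2 per trial, so few trials suffice when t is a constant fraction of m.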
-
TR1991-539
1991
The APRAM - The Rounds Complexity Measure and the Explicit Costs of Synchronization
Cole, R.;
Zajicek, O.
Abstract
|
PDF
Title: The APRAM - The Rounds Complexity Measure and the Explicit Costs of Synchronization
Author(s): Cole, R.; Zajicek, O.
Abstract:
This paper studies the explicit costs of synchronization by examining an asynchronous generalization of the PRAM model called the APRAM model. The APRAM model and its associated complexity measure, the rounds complexity, are defined and then illustrated by designing and analyzing two algorithms: a parallel summation algorithm which proceeds along an implicit complete binary tree and a recursive doubling algorithm which proceeds along a linked list. In both cases replacing global synchronization with local synchronization yields algorithms with reduced complexity.
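The tree-based summation pattern can be sketched with one thread per internal node, each waiting only on its own two children (local synchronization) rather than on a global barrier. This is an illustrative thread-based simulation with invented names, not the APRAM algorithm or its rounds-complexity analysis.

```python
import threading

def tree_sum(values):
    """Sum n values (n a power of two) along an implicit complete binary
    tree: leaves occupy heap slots n..2n-1, internal node k combines the
    partial sums of its children 2k and 2k+1 as soon as both are ready."""
    n = len(values)
    sums = [0] * n + list(values)            # heap layout, root at index 1
    done = [threading.Event() for _ in range(2 * n)]
    for i in range(n, 2 * n):
        done[i].set()                        # leaves are ready immediately

    def node(k):
        done[2 * k].wait()                   # local sync: wait for children
        done[2 * k + 1].wait()               # only, not for all processes
        sums[k] = sums[2 * k] + sums[2 * k + 1]
        done[k].set()

    threads = [threading.Thread(target=node, args=(k,)) for k in range(1, n)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return sums[1]

print(tree_sum([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

Replacing the per-child waits with a full barrier between tree levels would model the global-synchronization variant whose extra cost the paper measures.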
-
TR1991-574
1991
The Expected Advantage of Asynchrony
Cole, R.;
Zajicek, O.
Abstract
|
PDF
Title: The Expected Advantage of Asynchrony
Author(s): Cole, R.; Zajicek, O.
Abstract:
This paper studies the implicit costs of synchronization and the advantage that may be gained by avoiding synchronization in asynchronous environments. An asynchronous generalization of the PRAM model called the APRAM model is used and appropriate complexity measures are defined. The advantage asynchrony provides is illustrated by analyzing two algorithms: a parallel summation algorithm which proceeds along an implicit complete binary tree and a recursive doubling algorithm which proceeds along a linked list.
-
Ph.D. Thesis
1991
On the satisfiability problem for unquantified classes of formulae involving set-theoretical and topological constructs
Cutello, Vincenzo
Abstract
|
PDF
Title: On the satisfiability problem for unquantified classes of formulae involving set-theoretical and topological constructs
Candidate: Cutello, Vincenzo
Advisor(s): Schwartz, Jacob T.
Abstract:
In this thesis we prove the solvability of the satisfiability problem for various classes of unquantified set-theoretical formulae. In particular, we will provide satisfiability tests that given a formula as input produce a model for it, if any exists. We will also show how the decidability of certain fragments of set theory can be used to prove the solvability of the satisfiability problem for some unquantified languages involving topological notions. In particular, a list of topological statements whose validity can be checked by our algorithms is given. The underlying motivation for this study is to enrich the class of theoretical results that can be used for a set-theoretic proof verifier; we also provide lower bounds for what is undecidable in set theory and topology.
- TR1991-590 1991 Axiomatizing Qualitative Process Theory Davis, E. Abstract | PDF
-
TR1991-565
1991
Lucid Representations
Davis, E.
Abstract
|
PDF
Title: Lucid Representations
Author(s): Davis, E.
Abstract:
This paper criticizes the widespread idea that knowledge bases in AI systems should be complete and that representations should be "model-like." The arguments in favor of such representations are less cogent and more ambiguous than they appear at first. Levesque's suggestion that representations should be "vivid" is extremely restrictive, particularly in its uniform imposition of a closed-world assumption. Spatial representations that are adequate for reasoning about a wide range of physical phenomena must ultimately either use complex symbolic reasoning or deal with partial and imperfect approximations. Requiring that temporal representations be fully detailed simulations will often be extremely inefficient. Finally, a flexible intelligent system must necessarily deal with partial information of all kinds, and the techniques for carrying out reasoning about partial information using complete representations are very limited in their application.
-
TR1991-541
1991
The Kinematics of Cutting Solid Objects
Davis, E.
Abstract
|
PDF
Title: The Kinematics of Cutting Solid Objects
Author(s): Davis, E.
Abstract:
This paper studies how the cutting of one solid object by another can be described in a formal theory. We present two alternative first-order representations for this domain. The first views an object as gradually changing its shape until it is split, at which time the original object ceases to exist and two (or more) new objects come into existence. The second focusses instead on chunks of material which are part of the overall object. A chunk persists with constant shape until some piece of it is cut away, when the chunk ceases to exist. We prove that the two theories are equivalent under ordinary circumstances, and we show that they are sufficient to support some simple commonsense inferences and algorithms.
-
TR1991-570
1991
Additive Schwarz Methods for Elliptic Finite Element Problems in Three Dimensions
Dryja, M.;
Widlund, O.
Abstract
|
PDF
Title: Additive Schwarz Methods for Elliptic Finite Element Problems in Three Dimensions
Author(s): Dryja, M.; Widlund, O.
Abstract:
Many domain decomposition algorithms and certain multigrid methods can be described and analyzed as additive Schwarz methods. When designing and analyzing domain decomposition methods, we encounter special difficulties in the case of three dimensions and if the coefficients are discontinuous and vary over a large range. In this paper, we first introduce a general framework for Schwarz methods. Three classes of applications are then considered: certain wire basket based iterative substructuring methods, Neumann-Neumann algorithms with low dimensional, global subspaces and a modified form of a multilevel algorithm introduced by Bramble, Pasciak and Xu.
-
TR1991-571
1991
Efficient Algorithms for Cyclic Scheduling
Gasperoni, F.;
Schwiegelshohn, U.
Abstract
|
PDF
Title: Efficient Algorithms for Cyclic Scheduling
Author(s): Gasperoni, F.; Schwiegelshohn, U.
Abstract:
This work addresses the problem of non-preemptively scheduling a cyclic set of interdependent operations, representing for instance a program loop, when p identical processors are available. For p = 1 we give a simple, efficient, polynomial-time algorithm producing optimum results. When p > 1 the problem becomes NP-hard, and a slight modification of our algorithm generates provably close-to-optimum results.
We consider real-time systems in which the value of a task is proportional to its computation time. The system obtains the value of a given task if the task completes by its deadline. Otherwise, the system obtains no value for the task.
When such a system is underloaded (i.e. there exists a schedule for which all tasks meet their deadlines), Dertouzos showed that the earliest deadline first algorithm will achieve 100% of the possible value. We consider the case of a possibly overloaded system and present an algorithm which: 1. behaves like the earliest deadline first algorithm when the system is underloaded. 2. obtains at least 1/4 of the maximum value that an optimal clairvoyant algorithm could obtain even when the system is overloaded.
We implement our algorithm with an amortized cost of O(log n) time per task, where n bounds the number of tasks in the system at any instant.
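The earliest-deadline-first policy underlying the underloaded case can be sketched as follows. This toy version is uniprocessor and non-preemptive for brevity (unlike the preemptive setting analyzed above), and the task instances are invented; it only illustrates that value equals computation time and is credited only on meeting the deadline.

```python
import heapq

def edf(tasks):
    """tasks: list of (release, computation, deadline) triples.
    Run released tasks in earliest-deadline-first order; a task's value
    (its computation time) is obtained only if it completes by its deadline."""
    tasks = sorted(tasks)                    # by release time
    ready, t, value, i = [], 0, 0, 0
    while i < len(tasks) or ready:
        if not ready and t < tasks[i][0]:
            t = tasks[i][0]                  # idle until the next release
        while i < len(tasks) and tasks[i][0] <= t:
            r, c, d = tasks[i]
            heapq.heappush(ready, (d, c))    # ready queue ordered by deadline
            i += 1
        d, c = heapq.heappop(ready)
        t += c                               # run the earliest-deadline task
        if t <= d:
            value += c                       # completed by its deadline
    return value

# Underloaded instance: all tasks can meet their deadlines, full value = 6.
print(edf([(0, 2, 4), (1, 1, 3), (4, 3, 8)]))  # 6
```

In an overloaded instance even an optimal clairvoyant scheduler must forfeit some value; the algorithm in the paper guarantees at least 1/4 of that clairvoyant value while coinciding with EDF whenever the system is underloaded.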
-
Ph.D. Thesis
1991
Scheduling for Horizontal Systems: The VLIW Paradigm in Perspective
Gasperoni, Franco
Abstract
|
PDF
Title: Scheduling for Horizontal Systems: The VLIW Paradigm in Perspective
Candidate: Gasperoni, Franco
Advisor(s): Schonberg, Edmond
Abstract:
This work focuses on the automatic extraction of operation-level parallelism from programs originally intended to be sequential. Optimality issues in the framework of very long instruction word (VLIW) architectures and compilers are investigated. Possible advantages of an idealized dynamic scheduler over a purely static one are also explored. More specifically, the model and the results of scheduling theory are extended to account for the cyclicity and branching capabilities present in sequential programs. The existence of inherent bottlenecks in the VLIW paradigm is substantiated, and the advantage of dynamic over static scheduling is demonstrated for certain types of loops. A novel technique for efficient parallelization of straight-line loops is presented. A simple scheduling heuristic for arbitrary programs is proven to perform between a constant and a logarithmic factor from appropriately defined optimality criteria. Finally, we prove the existence of loops containing branches for which no parallel program can achieve time-optimal performance on VLIWs with unlimited resources. The overall aim of the thesis is to identify the family of sequential programs for which the VLIW model of parallel computation is viable.
-
TR1991-586
1991
On Shape Optimizing the Ratio of the First Two Eigenvalues of the Laplacian
Haeberly, J.
Abstract
|
PDF
Title: On Shape Optimizing the Ratio of the First Two Eigenvalues of the Laplacian
Author(s): Haeberly, J.
Abstract:
We investigate numerically a 1956 conjecture of Payne, Polya, and Weinberger. The conjecture asserts that the ratio of the first two eigenvalues of the Laplacian on a bounded domain Ω of the plane with Dirichlet boundary conditions reaches its minimum value precisely when Ω is a disk. A crucial feature of this problem is the loss of smoothness of the objective function at the solution. The following results form the core of our numerical treatment. First, we construct finite dimensional families of deformations of a disk equipped with a uniform triangulation. This permits the formulation of a discrete model of the problem via finite element techniques. Second, we build on the work of M. Overton to derive optimality conditions in terms of Clarke's generalized gradients for nonsmooth functions. These ideas are then combined into an algorithm and implemented in Fortran.
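For reference, the conjectured minimum ratio for the disk follows from the first positive zeros of the Bessel functions J0 and J1, since the Dirichlet eigenvalues of the disk are squared Bessel zeros. The zero values below are standard constants quoted to six decimals, not computed here.

```python
# First positive zeros of the Bessel functions J0 and J1 (standard values).
j0_1 = 2.404826   # sqrt(lambda_1) for the unit disk
j1_1 = 3.831706   # sqrt(lambda_2) for the unit disk

# Conjectured minimal value of lambda_2 / lambda_1 over plane domains:
print(round((j1_1 / j0_1) ** 2, 4))  # 2.5387
```

Any candidate domain produced by the deformation families described above should thus exhibit an eigenvalue ratio of at least about 2.5387 if the conjecture holds.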
-
TR1991-556
1991
Programming with Structures, Functions, and Objects
Henglein, F.;
Laufer, K.
Abstract
|
PDF
Title: Programming with Structures, Functions, and Objects
Author(s): Henglein, F.; Laufer, K.
Abstract:
We describe program structuring mechanisms for integrating algebraic, functional and object-oriented programming in a single framework. Our language is a statically typed higher-order language with specifications, structures, types, and values, and with universal and existential abstraction over structures, types, and values.
We show that existential types over structures generalize both the necessarily homogeneous type classes of Haskell and the necessarily heterogeneous object classes of object-oriented programming languages such as C++ or Eiffel. Following recent work on ML, we provide separate linguistic mechanisms for reusing specifications and structures. Subtyping is provided in the form of explicit type conversions.
The language mechanisms are introduced by examples to emphasize their pragmatic aspects. We compare them with the mechanisms of XML+, Haskell and Eiffel and give a type-theoretic perspective. These mechanisms have been developed within a larger, ongoing prototyping language design project.
-
TR1991-585
1991
Efficient Loop-Level Parallelism in ADA
Hind, M.
Abstract
|
PDF
Title: Efficient Loop-Level Parallelism in ADA
Author(s): Hind, M.
Abstract:
Parallelism in scientific applications can most often be found at the loop level. Although Ada supports parallelism via the task construct, its coarseness renders it unsuitable for this light-weight parallelism. In this work, we propose Ada constructs to achieve efficient loop-level parallelism in ANSI-Ada. This is accomplished in two steps. First, we present an idiom that allows the specification of light-weight tasks. Second, we give an efficient implementation of this idiom that is considerably more efficient than a standard Ada task.
In addition, we present an idiom that makes the fetch and add synchronization primitive available at the Ada level. Our implementation of this idiom is more efficient in both time and space than previous results. In addition to providing universal synchronization, using fetch and add simplifies program analysis (e.g. proving the absence of race conditions in the implementation of a parallel algorithm). Since all these idioms are written in standard Ada, they maintain the portability that is central to the mandated uses of the language.
-
Ph.D. Thesis
1991
Efficient Loop-Level Parallelism in ADA
Hind, Michael
Abstract
|
PDF
Title: Efficient Loop-Level Parallelism in ADA
Candidate: Hind, Michael
Advisor(s): Schonberg, Edmond
Abstract:
Parallelism in scientific applications can most often be found at the loop level. Although Ada supports parallelism via the task construct, its coarseness renders it unsuitable for this light-weight parallelism. In this work, we propose Ada constructs to achieve efficient loop-level parallelism in ANSI-Ada. This is accomplished in two steps. First, we present an idiom that allows the specification of light-weight tasks. Second, we give an efficient implementation of this idiom (for a variety of shared memory machines) that is considerably more efficient than a standard Ada task.
In addition, we present an idiom that makes the fetch_and_add synchronization primitive available at the Ada level. Our implementation of this idiom is more efficient in both time and space than previous results. In addition to providing universal synchronization, using fetch_and_add simplifies program analysis (e.g. proving the absence of race conditions in the implementation of a parallel algorithm). Since all these idioms are written in standard Ada, they maintain the portability that is central to the mandated uses of the language.
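Outside Ada, the fetch_and_add idiom can be sketched with an explicit lock standing in for the synchronization the Ada implementation provides; the self-scheduling loop below is a classic use, with all names invented for illustration.

```python
import threading

class SharedCounter:
    """fetch_and_add: atomically return the old value and add a delta."""

    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def fetch_and_add(self, delta=1):
        with self._lock:
            old, self.value = self.value, self.value + delta
            return old

# Classic use: workers claim distinct loop indices with no races and no
# need to prove anything subtle about interleavings.
counter, claimed = SharedCounter(), []
claimed_lock = threading.Lock()

def worker(n_iters=100):
    while True:
        i = counter.fetch_and_add()      # each index is handed out exactly once
        if i >= n_iters:
            return
        with claimed_lock:
            claimed.append(i)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(claimed) == 100, len(set(claimed)) == 100)  # True True
```

Because each index is returned to exactly one worker, the absence of race conditions on the loop body follows directly, which is the program-analysis benefit the abstract points to.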
-
Ph.D. Thesis
1991
Segmentation and Surface-Based Modeling of Objects in Three-Dimensional Biomedical Images
Kalvin, Alan
Abstract
|
PDF
Title: Segmentation and Surface-Based Modeling of Objects in Three-Dimensional Biomedical Images
Candidate: Kalvin, Alan
Advisor(s): Hummel, Robert
Abstract:
The rapid development of technologies for imaging the human body has led to a growing interest in the extraction and analysis of objects in 3D biomedical images for applications in fields such as clinical medicine, biomedical research, and physical anthropology.
This dissertation examines the problem of creating surface-based geometric models of biomedical objects that are suitable for analysis through visualization, mensuration, and manipulation. This is a two-stage problem. First the objects are identified by segmenting the 3D image into regions of interest, and then surface-based models of the objects are created.
We discuss the issues of segmentation and surface construction and introduce the following new methods for solving these problems.
First, we present the MLO algorithm, a general-purpose, domain-independent segmentation algorithm that has been applied successfully to identify skulls in CT images, the ventricle walls of the heart in MR images, brain ventricles in CT images, and carotid arteries in MR angiography images. It uses an iterative, cooperative procedure to segment an image by optimizing a cost function. To achieve a fast segmentation, a coarse-to-fine strategy is employed, using a multiresolution pyramid.
The GRG algorithm is a model-driven, special-purpose algorithm for identifying thin bone in CT head images. The algorithm, developed specifically for craniofacial surgical planning, uses anatomical knowledge in the segmentation process, and can handle the abnormal anatomy of craniofacial patients. It successfully finds most of the thin bone that cannot be found using previous methods.
ALLIGATOR is a surface construction algorithm that creates models using the ``winged-edge'' data structure of Baumgart, enabling efficient access to the topological and geometric information of the surfaces, and permitting efficient, topologically consistent modifications to the representations.
Unlike previous surface construction algorithms, ALLIGATOR is suitable not just for visualizing biomedical objects, but for measuring and manipulating them as well. Another important feature of ALLIGATOR is that it uses an adaptive face-merging process to create surface models that are significantly more concise, in terms of vertices, edges, and faces, than the models produced by other surface construction algorithms.
-
Ph.D. Thesis
1991
The Development of Parallel Image Algorithms by Prototyping
Kelly, Robert
Abstract
|
PDF
Title: The Development of Parallel Image Algorithms by Prototyping
Candidate: Kelly, Robert
Advisor(s): Hummel, Robert
Abstract:
We examine the process of parallel algorithm development for a class of image synthesis and image processing problems. Algorithms are developed for a class of parallel machines characterized by shared-memory multiprocessors, such as the Ultracomputer model. The new algorithms are asynchronous in nature, and many employ the ``pool of tasks'' paradigm. These algorithms are prototyped using the sequential specification language SETL, which has been adapted to function as a parallel specification tool. The issue of refinement of the high-level specification is illustrated with a number of examples of machine-specific implementations.
Parallel algorithms are proposed for the connected components problem, for hidden surface removal in surface rendering, and parallel algorithms for ray tracing are discussed. Within the investigation of connected components algorithms, new algorithms are suggested for four classes of approaches to the problem: (1) Adjacency matrix methods, (2) pointer graph methods based on the vertex collapse algorithm of Hirschberg, (3) pointer graph methods based on the Shiloach/Vishkin connected components algorithm, and (4) image scan algorithms, based on the sequential raster scan ``blob coloring'' algorithm. For the third area, the Shiloach/Vishkin-type connected components algorithm, we show how a stronger model of computation (one that permits constant-time concurrent additive-writes) allows the elimination of one of the steps of the algorithm. Although this modification does not improve the asymptotic time complexity of the algorithm, the MIMD version of the Shiloach/Vishkin algorithm is then considerably simplified, contains fewer synchronization points, and has improved expected execution-time performance.
All algorithms are given in the parallel-adapted SETL language. The final versions of all proposed parallel connected components algorithms are further refined into EPEX/Fortran, suitable for execution on an RP3 simulator system. Empirical results are obtained for various algorithms, by use of instrumenting either the SETL version or the EPEX/Fortran version, thereby providing estimates of expected performance times by means of examining average lengths of queues of tasks. In particular, queue activity patterns are examined for executions of the parallel adjacency matrix connected components algorithm, and the MIMD version of the Shiloach/Vishkin connected components algorithm. For the latter, run-time performance estimates are made demonstrating the utility of the modifications made to the MIMD version of the algorithm. For the image scan algorithms, estimates are obtained comparing the size of subimages that are assigned to processors against the sizes of the reduced graph connected components problem that result, based on runs of the EPEX/Fortran version. Finally, the shared-memory access patterns of the parallel ray-tracing algorithm are examined, suggesting that the algorithm is viable in terms of memory contention rates.
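The pointer-graph idea behind the Shiloach/Vishkin-style algorithms can be sketched sequentially: every vertex keeps a parent pointer, edges hook trees together, and pointer jumping flattens paths. This is a serialized illustration of those two core steps, not the MIMD algorithm or its EPEX/Fortran refinement.

```python
def connected_components(n, edges):
    """Label each of n vertices with a component representative, using
    parent pointers, hooking, and pointer jumping (path halving)."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]    # pointer jumping: halve the path
            v = parent[v]
        return v

    for u, v in edges:                       # "hook" step, serialized here
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[max(ru, rv)] = min(ru, rv)

    return [find(v) for v in range(n)]

# Two components: {0, 1, 2, 3} and {4, 5}.
print(connected_components(6, [(0, 1), (1, 2), (2, 3), (4, 5)]))
# [0, 0, 0, 0, 4, 4]
```

In the parallel version many hook and jump operations proceed concurrently, which is where the concurrent additive-write capability discussed above lets one of the algorithm's steps be eliminated.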
- TR1991-572 1991 An Optimal Scheduling Algorithm with a Competitive Factor for Real-Time Systems Koren, G.; Shasha, D. Abstract | PDF
-
Ph.D. Thesis
1991
Semantically Based Concurrent Data Structure Algorithms
Lanin, Vladimir
Abstract
|
PDF
Title: Semantically Based Concurrent Data Structure Algorithms
Candidate: Lanin, Vladimir
Advisor(s): Shasha, Dennis
Abstract:
A computational environment is called concurrent when it allows several threads of sequential control, or processes, to overlap in time and to communicate with each other. Such an environment is called synchronous when the length of time it takes any process to execute any sequence of steps can be determined in advance. When such a calculation is impossible (at least to the precision required), the environment is called asynchronous.
Algorithms designed to work in the asynchronous concurrent environment have appeared in the literature for such data structures as B-trees, hash tables, and queues. The most common standard of correctness for a concurrent algorithm is serializability, which requires that the effects of a concurrent computation be equivalent to some serial composition of the same actions. However, several notions of ``equivalence'' exist, depending on whether they take into account the semantics of the data structure, or only the syntax of the computation.
We examine the drawbacks and advantages of several correctness standards, and identify a particular standard to be of general utility. Furthermore, we formalize the notion of decisive operations, and show how it can be applied to greatly simplify semantic serializability proofs.
We apply the concepts of syntactic and semantic serializability to the development of several novel algorithms, including an extension of the tree protocol to changing trees, a highly concurrent B-tree algorithm, and a wait-free set manipulation algorithm. Useful techniques appearing in the design are identified, and the correctness proofs serve as examples of the techniques previously described.
- TR1991-555 1991 Comparing Three Approaches to Transformational Programming Laufer, K. Abstract | PDF
-
Ph.D. Thesis
1991
On the Optimization of Term Rewriting
Li, Ke
Abstract
|
PDF
Title: On the Optimization of Term Rewriting
Candidate: Li, Ke
Advisor(s): Kedem, Zvi
Abstract:
Term rewriting systems (TRSs) are widely applied in automated theorem proving, equational languages, logic programming, specification of software and hardware, and other symbolic computations. An important computation procedure in applications of TRSs is to reduce a term to its normal form. The research of this thesis examines the complexity of normal form computation and explores efficient rewriting strategies for it.
A rewriting strategy is said to be optimal if, for any term, it always generates the shortest derivation when computing the term's normal form. First, we prove that a universal optimal rewriting strategy for arbitrary canonical TRSs does not exist unless NP = P. We prove the same result for AC TRSs, in which some functions are associative and commutative.
To find efficient rewriting strategies, we divide TRSs into three categories: variable-more, variable-equal, and variable-fewer, and propose optimal rewriting strategies for the first two categories and approximate strategies for the last.
We have done experiments on RRL (Rewrite Rule Laboratory) -- an automated theorem prover with term rewriting as the basic inference rule. The experimental results confirm our theoretical results. Based on our theory and experiments, we have improved RRL by implementing new programs that automatically choose an efficient rewriting strategy for a given term rewriting system.
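The core loop of normal-form computation is easy to sketch. The following minimal rewriter in Python is an illustration only, not RRL's implementation: the term encoding, the example rules, and the leftmost-outermost strategy are all assumptions made here. Terms are nested tuples `('f', arg, ...)`; strings beginning with `?` are rule variables.

```python
def match(pat, term, env=None):
    """Return a substitution making pat equal to term, or None."""
    env = dict(env or {})
    if isinstance(pat, str) and pat.startswith('?'):
        if pat in env:
            return env if env[pat] == term else None
        env[pat] = term
        return env
    if (isinstance(pat, tuple) and isinstance(term, tuple)
            and len(pat) == len(term) and pat[0] == term[0]):
        for p, t in zip(pat[1:], term[1:]):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pat == term else None

def subst(term, env):
    """Instantiate the variables of term using substitution env."""
    if isinstance(term, str) and term.startswith('?'):
        return env[term]
    if isinstance(term, tuple):
        return (term[0],) + tuple(subst(a, env) for a in term[1:])
    return term

def rewrite_once(term, rules):
    """Apply the first applicable rule at the outermost possible position."""
    for lhs, rhs in rules:
        env = match(lhs, term)
        if env is not None:
            return subst(rhs, env)
    if isinstance(term, tuple):
        for i in range(1, len(term)):
            new = rewrite_once(term[i], rules)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

def normal_form(term, rules):
    """Rewrite until no rule applies; termination is assumed (canonical TRS)."""
    while True:
        new = rewrite_once(term, rules)
        if new is None:
            return term
        term = new

# Peano addition: add(0, x) -> x ;  add(s(x), y) -> s(add(x, y))
rules = [(('add', '0', '?x'), '?x'),
         (('add', ('s', '?x'), '?y'), ('s', ('add', '?x', '?y')))]
two_plus_one = ('add', ('s', ('s', '0')), ('s', '0'))
print(normal_form(two_plus_one, rules))   # ('s', ('s', ('s', '0')))
```

The choice of which redex to contract at each step is exactly what a rewriting strategy fixes; the thesis's complexity results concern how much that choice can matter.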
-
Ph.D. Thesis
1991
The Design and Implementation of ALLOY, a Higher Level Parallel Programming Language
Mitsolides, Thanasis
Abstract
|
PDF
Title: The Design and Implementation of ALLOY, a Higher Level Parallel Programming Language
Candidate: Mitsolides, Thanasis
Advisor(s): Harrison, Malcolm C.
Abstract:
The goal of this thesis is to show that it is possible to define a parallel higher level programming language for programming in the large which can easily express both complicated parallel problems and traditional serial ones. Such a language would provide many good features of serial and parallel programming languages and be appropriate for programming massively parallel computing systems. To demonstrate this, a simple language, called ALLOY, was designed. The main features of this language could be incorporated into other languages.
ALLOY directly supports functional, object-oriented and logic programming styles in a unified and controlled framework. Evaluation modes support serial or parallel execution, eager or lazy evaluation, and non-determinism or multiple solutions. These modes can be combined freely. ALLOY is simple, utilizing only 29 primitives, half of which are for object-oriented programming.
The power of ALLOY is demonstrated through the use of a wide variety of examples. Some of the examples are: a) partition sort and FP library demonstrating clarity, efficiency, and simple parallelism, b) prime numbers and buffering demonstrating the ability to select between eager and lazy evaluation, c) systolic sort and merge sort demonstrating dynamic networks of communicating processes, d) N-queens and list permutations demonstrating serial and parallel searching. A library is given for programming in logic programming styles. Finally a number of parallel objects demonstrate ALLOY's ability to exploit massively parallel architectures effectively.
An interpreter for ALLOY, together with a number of utilities and a programming environment, has been written in Common Lisp. The system is available for anonymous ftp. It is shown that ALLOY can have a reasonably efficient implementation on shared-memory multiprocessor (MIMD) systems supporting highly parallel operations, on distributed architectures, and possibly on data flow architectures as well.
-
TR1991-562
1991
Decomposition and Fictitious Domains Methods for Elliptic Boundary Value Problems
Nepomnyaschikh, S.
Abstract
|
PDF
Title: Decomposition and Fictitious Domains Methods for Elliptic Boundary Value Problems
Author(s): Nepomnyaschikh, S.
Abstract:
Boundary value problems for elliptic second order equations in three-dimensional domains with piecewise smooth boundaries are considered. Discretization of the problem is performed using a conventional version of the finite element method with piecewise linear basis functions. The main purpose of the paper is the construction of a preconditioning operator for the resulting system of grid equations. The method is based on two approaches: decomposition of the domain into subdomains and using a new version of the method of fictitious domains. The rate of convergence of the corresponding conjugate gradient method is independent of both the grid size and the number of subdomains.
-
TR1991-566
1991
Optimality Conditions and Duality Theory for Minimizing Sums of the Largest Eigenvalues of Symmetric Matrices
Overton, M.;
Womersley, R.
Abstract
|
PDF
Title: Optimality Conditions and Duality Theory for Minimizing Sums of the Largest Eigenvalues of Symmetric Matrices
Author(s): Overton, M.; Womersley, R.
Abstract:
This paper gives max characterizations for the sum of the largest eigenvalues of a symmetric matrix. The elements which achieve the maximum provide a concise characterization of the generalized gradient of the eigenvalue sum in terms of a dual matrix. The dual matrix provides the information required to either verify first-order optimality conditions at a point or to generate a descent direction for the eigenvalue sum from that point, splitting a multiple eigenvalue if necessary. A model minimization algorithm is outlined, and connections with the classical literature on sums of eigenvalues are explained. Sums of the largest eigenvalues in absolute value are also addressed.
Keywords: symmetric matrix, maximum eigenvalues, spectral radius, minimax problem, max characterization, generalized gradient.
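The k = 1 case of the max characterization can be checked numerically. The sketch below is a plain-Python illustration with a 2x2 matrix chosen here for the example (not taken from the paper): the largest eigenvalue of a symmetric matrix A equals the maximum of the quadratic form x^T A x over unit vectors x.

```python
import math

# Illustrative symmetric matrix; its eigenvalues are 3 and 1.
A = [[2.0, 1.0], [1.0, 2.0]]

def quad(x):
    """The quadratic form x^T A x."""
    return sum(A[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

# Scan unit vectors (cos t, sin t); the maximum of the form is lambda_1.
best = max(quad((math.cos(t), math.sin(t)))
           for t in (2 * math.pi * i / 10000 for i in range(10000)))

# Closed-form largest eigenvalue of a symmetric 2x2 matrix.
a, b, d = A[0][0], A[0][1], A[1][1]
lam1 = (a + d) / 2 + math.sqrt(((a - d) / 2) ** 2 + b ** 2)
print(round(best, 4), lam1)   # 3.0 3.0
```

The unit vector achieving the maximum is an eigenvector for lambda_1; for k > 1 the same picture holds with orthonormal k-frames in place of single unit vectors, which is the characterization the paper dualizes.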
-
Ph.D. Thesis
1991
Semantic program analyses for storage management optimizations in functional language implementations
Park, Young G.
Abstract
|
PDF
Title: Semantic program analyses for storage management optimizations in functional language implementations
Candidate: Park, Young G.
Advisor(s): Goldberg, Benjamin
Abstract:
One of the major overheads in implementing functional languages in both uniprocessor and multiprocessor environments is the storage management overhead due to dynamic allocation and automatic reclamation of indefinite-extent storage. We investigate compiler optimizations to reduce such overhead by statically inferring lifetime information about dynamically-allocated objects.
We have developed a set of compile-time semantic analyses for a higher-order monomorphic strict functional language based on denotational semantics and abstract interpretation:
- Escape Analysis: provides information about the relative lifetimes of objects with respect to the activation of the function call.
- Refined Escape Analysis: provides, as a refinement of escape analysis, information about the lifetimes of components of aggregate structures.
- Reference Escape Analysis: provides information about the relative lifetimes of references created within a function with respect to the activation of the function call.
- Order-of-Demand Analysis: provides information about the order in which the values of bound variables are demanded, which allows a range of information to be computed, including strictness, evaluation-order and evaluation-status information.
These analyses are extended to both polymorphic and non-strict (normal-order or lazy evaluation) languages.
Using statically inferred escape information, we have proposed a variety of storage management optimization techniques including stack allocation, explicit reclamation, in-place reuse, reference counting elimination, block allocation/reclamation, and improving generational garbage collection.
-
TR1991-580
1991
An Additive Schwarz Method for the P-Version Finite Element Method
Pavarino, L.
Abstract
|
PDF
Title: An Additive Schwarz Method for the P-Version Finite Element Method
Author(s): Pavarino, L.
Abstract:
The additive Schwarz method was originally proposed for the h-version finite element method for elliptic problems. In this paper, we apply it to the p-version, in which increased accuracy is achieved by increasing the degree of the elements while the mesh is fixed. We obtain a constant bound, independent of p, for the condition number of the iteration operator in two and three dimensions. The result holds for linear, self-adjoint, second order elliptic problems and for quadrilateral elements.
-
Ph.D. Thesis
1991
Counting Real Zeros
Pedersen, Paul
Abstract
|
PDF
Title: Counting Real Zeros
Candidate: Pedersen, Paul
Advisor(s): Mishra, Bud
Abstract:
This thesis presents an n-dimensional generalization of Hermite's theorem for counting real roots of a polynomial using quadratic forms. We solve the problem of counting the number of real solutions of a system of polynomial equations within an algebraic polyhedron in n-dimensional space, where the polynomials are taken to have rational coefficients.
Our algorithm is purely symbolic, which means that it may be used to implement infinite-precision algorithms for arithmetic in the real-algebraic subset of the real numbers. We present algorithms for doing this as an application of the general theory.
Our algorithms are based on resultant theory, both because this theory provides insights into the algorithms, and because it makes possible a comparatively clear complexity analysis which shows the algorithms to be worst-case optimal, i.e., singly exponential in the degree of the polynomials.
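The classical one-dimensional Hermite method that the thesis generalizes can be sketched compactly. The following Python sketch is illustrative only, not the thesis's algorithm, and it relies on a genericity assumption (all leading principal minors of the Hermite matrix nonzero, so Jacobi's signature rule applies): build the Hankel matrix of Newton power sums of the roots and read the number of distinct real roots off its signature.

```python
from fractions import Fraction

def power_sums(coeffs, m):
    """Newton's identities: power sums s_0..s_{m-1} of the roots of the
    monic polynomial x^n + c1 x^(n-1) + ... + cn, coeffs = [1, c1, ..., cn]."""
    n = len(coeffs) - 1
    s = [Fraction(n)]
    for k in range(1, m):
        acc = -k * Fraction(coeffs[k]) if k <= n else Fraction(0)
        for i in range(1, min(k, n) + 1):
            if k - i > 0:
                acc -= Fraction(coeffs[i]) * s[k - i]
        s.append(acc)
    return s

def det(M):
    """Determinant by cofactor expansion (fine for the small sizes here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def count_real_roots(coeffs):
    """Signature of the Hermite (Hankel) matrix H = [s_{i+j}], which equals
    the number of distinct real roots; assumes nonzero leading minors."""
    n = len(coeffs) - 1
    s = power_sums(coeffs, 2 * n - 1)
    H = [[s[i + j] for j in range(n)] for i in range(n)]
    minors = [Fraction(1)] + [det([row[:k] for row in H[:k]])
                              for k in range(1, n + 1)]
    pos = sum(1 for a, b in zip(minors, minors[1:]) if a * b > 0)
    neg = sum(1 for a, b in zip(minors, minors[1:]) if a * b < 0)
    return pos - neg

print(count_real_roots([1, 0, -1, 0]))  # x^3 - x : three real roots -> 3
print(count_real_roots([1, 0, 1, 0]))   # x^3 + x : one real root   -> 1
```

Everything is exact rational arithmetic, which is the point of the thesis's "purely symbolic" remark: no numerical root isolation is ever performed.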
-
Ph.D. Thesis
1991
Combinatorial and algorithmic analysis of stabbing and visibility problems in three-dimensional space
Pellegrini, Marco
Abstract
|
PDF
Title: Combinatorial and algorithmic analysis of stabbing and visibility problems in three-dimensional space
Candidate: Pellegrini, Marco
Advisor(s): Pollack, Richard
Abstract:
Given a set $T$ of triangles in 3-space, with $|T| = n$, let ${\cal S}(T)$ be the set of all lines stabbing the set $T$. The combinatorial descriptive complexity of ${\cal S}(T)$ is denoted by $\#{\cal S}(T)$. The following questions about ${\cal S}(T)$ are considered in this thesis: (a) answer the query ``given a line $l$, is $l \in {\cal S}(T)$?'' (query problem); (b) decide whether ${\cal S}(T) \ne \emptyset$ (existence problem); (c) give upper and lower bounds on $\#{\cal S}(T)$. The following results are shown in this thesis: (1) There is an $\Omega(n^3)$ lower bound for $\#{\cal S}(T)$; also, ${\cal S}(T)$ may have $\Omega(n^2)$ connected components. (2) There is an $O(n^{3+\epsilon})$ upper bound on $\#{\cal S}(T)$; within the same time bound it is possible to solve the existence problem. (3) The existence problem for triangles on a set of planes with $g$ different plane inclinations can be solved in $O(g^2 n^2 \log n)$ time. (4) The query problem is solvable with $O(n^{2+\epsilon})$ preprocessing and storage and $O(\log n)$ query time. (5) The results (1)-(4) extend, with the same asymptotic bounds, to sets of convex polyhedra with total complexity $n$.
Given a set $T$ of $n$ disjoint triangles, the ray shooting problem for $T$ is the following: preprocess $T$ so as to be able to answer queries of the form ``given a ray $\rho$, does $\rho$ hit any triangle in $T$?''. The following results are shown in this thesis: (1) Using $O(n^{3+\epsilon})$ randomized preprocessing time and storage, we can answer ray-shooting queries in $O(\sqrt{n} \log^2 n)$ worst-case query time. (2) If we are given $m > n^{7/5}$ rays and $n$ disjoint triangles, we can answer all the ray shooting queries in $O(m^{5/6-\delta} n^{5/6+5\delta} \log n + m \log n + n \log m)$ randomized expected time and $O(m+n)$ space, for every $\delta > 0$; the multiplicative constants depend on $\delta$. (3) Given $m$ rays and $n$ axis-oriented boxes, we can answer ray shooting queries in randomized expected time $O(m^{3/4-\delta} n^{3/4+3\delta} \log^3 n + m \log^3 n + n \log m)$ and $O(m+n)$ space, for $1/28 < \delta < 1/9$; the multiplicative constants depend on $\delta$.
-
Ph.D. Thesis
1991
Properties of Convex Polytopes
Prabhu, N.
Abstract
|
PDF
Title: Properties of Convex Polytopes
Candidate: Prabhu, N.
Advisor(s): Pollack, Richard
Abstract:
The thesis presents some results about the boundary complexes of convex polytopes.
- 1. The intersection of affine subspaces with the boundary complexes of convex polytopes: We show that the lower bound on the dimension of a subspace that intersects the relative interiors of all j-faces of a d-polytope is 2(d - j). We also show that every d-simplex attains the above lower bound; hence the bound is tight. Further, using neighborly polytopes, we construct polytopes with an arbitrarily large number of vertices which attain the above lower bound.
- 2. Hamiltonian simple polytopes: Given integers n and d, n > d, does there exist a simple d-polytope with n vertices? We show that for all $n > c d\sqrt{d}$ (c a constant) one can construct a simple d-polytope with n vertices. In fact, for all $n > c d\sqrt{d}$ we construct a Hamiltonian simple d-polytope with n vertices. The Hamiltonicity of the constructed polytopes improves a result of Victor Klee.
- 3. Construction of a 4-dimensional polytope to show that, in general, one cannot find a hyperplane in R^d that contains a given pair of vertices of a d-polytope and has two or more facets of the polytope in one of the closed halfspaces.
- 4. A generalization of Balinski's theorem: Balinski showed that the graph of every d-polytope is d-connected, i.e., removing any d - 1 vertices does not disconnect the remaining subgraph. We show that removing all the vertices of a j-face (j < d) leaves the remaining subgraph (d - j - 1)-connected, and that this bound is tight for j < d - 1.
- 5. A conjecture of Micha Perles: Perles conjectured that every induced, connected, (d-1)-regular subgraph of the graph of a simple d-polytope determines a facet of the polytope. Generalizing Perles' conjecture to triangulated spheres leads to a question about the existence of a certain triangulation of the 3-ball and the solid torus. We show that neither the 3-ball nor the solid torus admits the required triangulation. Further, we prove Perles' conjecture for some subclasses of simple polytopes and prove a few reduction theorems.
- TR1991-554 1991 On a Parallel Implementation of Geometric Hashing on the Connection Machine Rigoutsos, I.; Hummel, R. Abstract | PDF
- TR1991-553 1991 Scalable Parallel Geometric Hashing for Hypercube SIMD Architectures Rigoutsos, I.; Hummel, R. Abstract | PDF
- TR1991-561 1991 Amortized Complexity of Data Structures Sundar, R. Abstract | PDF
-
Ph.D. Thesis
1991
Amortized Complexity of Data Structures
Sundar, Rajamani
Abstract
|
PDF
Title: Amortized Complexity of Data Structures
Candidate: Sundar, Rajamani
Advisor(s): Boppana, Ravi
Abstract:
This thesis investigates the amortized complexity of some fundamental data structure problems and introduces interesting ideas for proving lower bounds on amortized complexity and for performing amortized analysis. The problems are as follows:
- Dictionary Problem: A dictionary is a dynamic set that supports membership searches and changes under insertions and deletions of elements. It is open whether there exists a dictionary data structure that takes constant amortized time per operation and uses space polynomial in the dictionary size. We prove that dictionary operations require log-logarithmic amortized time under a multilevel hashing model that is based on Yao's cell probe model.
- Splay Algorithm's Analysis: Splay is a simple, efficient algorithm for searching binary search trees, devised by Sleator and Tarjan, that uses rotations to reorganize the tree. Tarjan conjectured that Splay takes linear time to process deque operation sequences on a binary tree and proved a special case of this conjecture called the Scanning Theorem. We prove tight bounds on the maximum numbers of various types of right rotations in a sequence of right rotations performed on a binary tree. One of the lower bounds refutes a conjecture of Sleator. We apply the upper bounds to obtain a nearly linear upper bound for Tarjan's conjecture. We give two new proofs of the Scanning Theorem, one of which is a potential-based proof that solves a problem of Tarjan.
- Set Equality Problem: The task of maintaining a dynamic collection of sets under various operations arises in many applications. We devise a fast data structure for maintaining sets under equality-tests and under creations of new sets through insertions and deletions of elements. Equality-tests require constant time and set-creations require logarithmic amortized time. This improves previous solutions.
-
Ph.D. Thesis
1991
Performance Evaluation of Solutions to the TLB Consistency Problem
Teller, Patricia
Abstract
|
PDF
Title: Performance Evaluation of Solutions to the TLB Consistency Problem
Candidate: Teller, Patricia
Advisor(s): Gottlieb, Allan
Abstract:
To implement virtual memory efficiently, virtual-to-physical address translation information is stored in page tables and cached in translation-lookaside buffers (TLBs). In multiprocessors with multiple TLBs, page-table modifications can result in outdated TLB entries, the use of which can cause erroneous memory accesses.
We propose three new solutions to this TLB consistency problem which, unlike existing solutions for highly parallel shared-memory multiprocessors, require no interprocessor synchronization and communication, do not interrupt processor execution, and introduce no unnecessary serialization.
The cost of each of our solutions is embodied in the cost of TLB reloads, which load translation information for referenced pages into TLBs. Two assume TLBs at processors and one assumes TLBs at memory. We study their performance in scalable multiprocessor architectures via a trace-driven simulation system capable of simulating a range of systems using just one address trace.
Our results show that system performance improves if TLBs are located at memory, rather than processors, provided that memory is organized as multiple paging arenas, where the mapping of pages to arenas is fixed.
A class of parallel workloads can produce a number of TLB reloads, R, that grows linearly with the number of processors, N. A set of our simulations for processor-based TLBs validates this model.
A processor-based TLB reload costs O(log N) because of network transit. Thus, management of processor-based TLBs, whether consistency-ensuring or not, has an overhead that grows as N log N.
The cost of a memory-based TLB reload within a paging arena can be made smaller than that of a processor-based TLB, since additional network transits are not required.
Simulation results show that when there is only one paging arena, memory-based TLBs exhibit generally larger miss rates than processor-based TLBs, and the related overhead is generally larger. When there are two paging arenas, memory-based TLBs produce smaller miss rates than processor-based TLBs of equal size, and the related overhead is generally smaller. To maintain low overhead for large machines, it is likely that the number of paging arenas must grow as O(N).
-
Ph.D. Thesis
1991
Applications and Analysis of Probabilistic Techniques
Tetali, Prasad
Abstract
|
PDF
Title: Applications and Analysis of Probabilistic Techniques
Candidate: Tetali, Prasad
Advisor(s): Spencer, Joel
Abstract:
The thesis illustrates the strength of randomness by applying some recent probabilistic techniques to solve problems in number theory, graph theory and computer science.
The first part of the thesis is concerned with random construction of integer sequences with certain additive properties. A set of natural numbers is called an asymptotic basis of order k if every sufficiently large number can be expressed as a sum of k distinct numbers from the set. We prove that for every fixed k there exists an asymptotic basis of order k such that the number of representations of n is $\Theta(\log n)$. The case k = 2 was proved in 1956 by Paul Erdős.
The second part deals with analysis of random walks on graphs. Random walks on graphs are known to have interesting analogies in electrical networks. A precise characterization of effective resistance in electrical networks is provided in this thesis in terms of random walks on the underlying graphs. This interpretation of effective resistance yields interesting new results and new proofs of some known results. The main result is an exact formula for the hitting time between two vertices in terms of the effective resistances in the network, settling an open question. This is much in the spirit of the commute time result by Ashok Chandra et al.
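The hitting-time/effective-resistance connection can be checked exactly on a tiny example. The sketch below uses plain Python with exact rationals; the three-vertex path graph is an illustrative choice made here, not from the thesis. It verifies the commute-time identity of Chandra et al., C(u,v) = 2 m R(u,v), where m is the number of edges and R the effective resistance with unit-resistance edges.

```python
from fractions import Fraction

def solve(A, b):
    """Gauss-Jordan elimination over exact rationals."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def hitting_time(adj, u, v):
    """Expected steps of a simple random walk from u to v: h(v) = 0 and
    h(w) = 1 + mean of h over neighbors, for every w != v."""
    nodes = [w for w in adj if w != v]
    idx = {w: i for i, w in enumerate(nodes)}
    A = [[Fraction(0)] * len(nodes) for _ in nodes]
    b = [Fraction(1)] * len(nodes)
    for w in nodes:
        A[idx[w]][idx[w]] = Fraction(1)
        for x in adj[w]:
            if x != v:
                A[idx[w]][idx[x]] -= Fraction(1, len(adj[w]))
    return solve(A, b)[idx[u]]

def effective_resistance(adj, u, v):
    """Unit edges: inject 1 A at u, ground v; R equals the potential at u."""
    nodes = [w for w in adj if w != v]
    idx = {w: i for i, w in enumerate(nodes)}
    A = [[Fraction(0)] * len(nodes) for _ in nodes]
    b = [Fraction(0)] * len(nodes)
    b[idx[u]] = Fraction(1)
    for w in nodes:
        A[idx[w]][idx[w]] = Fraction(len(adj[w]))
        for x in adj[w]:
            if x != v:
                A[idx[w]][idx[x]] -= 1
    return solve(A, b)[idx[u]]

path = {0: [1], 1: [0, 2], 2: [1]}     # path on 3 vertices, m = 2 edges
m = 2
commute = hitting_time(path, 0, 2) + hitting_time(path, 2, 0)
print(commute, 2 * m * effective_resistance(path, 0, 2))   # 8 8
```

Both sides come out to 8: each one-way hitting time on the path is 4, and the end-to-end resistance of two unit resistors in series is 2.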
-
Ph.D. Thesis
1991
Resilient Computations in the Presence of Slow-Downs
Turek, John
Abstract
|
PDF
Title: Resilient Computations in the Presence of Slow-Downs
Candidate: Turek, John
Advisor(s): Shasha, Dennis; Cole, Richard
Abstract:
With the advent of low-cost workstations, distributed systems are becoming increasingly attractive. However, as the number of components in the system increases, so does the probability of some component failing. When system designers discuss fault tolerance, they typically restrict themselves to the problem of handling fail-stop failures. This work proposes an enhanced failure model that allows processes to fail by either slowing down or stopping; slow processes may later speed up, continue to proceed slowly, or eventually stop. We call such failures slow-downs. The model does not assume the ability to distinguish among these possibilities, say, by using a timeout mechanism, nor does it assume that it is possible to kill a slow process.
This thesis presents several results in this context. We discuss how to execute transactions under the slow-down model when the correctness criterion is serializability. We then discuss how to transform a class of lock-based concurrent data structures into nonblocking data structures. Both results are developed in the context of a shared-memory machine having an atomic compare&swap.
We conclude this thesis by giving algorithms that can be used to emulate a reliable shared memory with compare&swap on a message passing system prone to slow-downs.
-
Ph.D. Thesis
1991
Query Optimization in Database and Information Retrieval Systems
Wang, Tsong-Li
Abstract
|
PDF
Title: Query Optimization in Database and Information Retrieval Systems
Candidate: Wang, Tsong-Li
Advisor(s): Shasha, Dennis
Abstract:
Recently, several prototype and commercial systems based on a loosely-coupled shared-nothing architecture have been proposed and built for database applications. To achieve speed-ups proportional to the number of processors for operations such as selections and joins, such systems often distribute data across storage units using a hashing function. In the first part of this thesis, we investigate ways of minimizing response time for various multi-join queries in such systems. We develop a dynamic programming algorithm for queries whose closures are chains. We next prove the NP-completeness of the problem for more general queries and propose four heuristics for them. We then evaluate experimentally the relative performance of these heuristics and their performance relative to the optimum. The empirical results show that a hybrid heuristic combining our chain algorithm with a heuristic related to Kruskal's spanning tree algorithm performs well.
In the second part of the thesis, we present a scheme to answer best-match queries from a file containing a collection of objects. A best-match query is to find the objects in the file which are closest (according to some (dis)similarity measure) to a given target.
Previous work suggested that one can reduce the computational effort required to achieve the desired results using the triangle inequality when starting with a data structure for the file which reflects some precomputed intrafile distances. We generalize the technique to allow the optimum use of any given set of precomputed intrafile distances. We then extend our scheme to a class of queries for retrieving similar or dissimilar objects that commonly arise in vision and molecular biology. Artificial data and actual protein sequences are used to illustrate the effectiveness of our scheme for different queries, and to compare its performance with previous algorithms.
Finally, we implement our techniques into a tree information system that enables users to retrieve and extract information from trees based on approximate comparison. We expect this system to have applications in pattern recognition, biology, linguistics, and programming languages. The system is implemented in C and X-windows, and is fully operational on SUN workstations.
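The triangle-inequality pruning described above admits a minimal sketch. The generic pivot-based version below is an illustration, not the thesis's data structure; the point set, the metric, and the single pivot are assumptions chosen for the example. With every object's distance to a pivot p precomputed, |d(q,p) - d(p,x)| lower-bounds d(q,x), so candidates can be visited in order of this bound and the scan stopped once the bound exceeds the best distance found.

```python
def best_match(query, objects, dist, pivot_dists, pivot):
    """Nearest object to query, skipping distance computations whose
    triangle-inequality lower bound already exceeds the best so far."""
    dq_p = dist(query, pivot)
    best, best_d, computed = None, float('inf'), 0
    # Visit candidates in order of the lower bound |d(q,p) - d(p,x)|.
    order = sorted(range(len(objects)),
                   key=lambda i: abs(dq_p - pivot_dists[i]))
    for i in order:
        if abs(dq_p - pivot_dists[i]) >= best_d:
            break            # every remaining bound is at least this large
        d = dist(query, objects[i])
        computed += 1
        if d < best_d:
            best, best_d = objects[i], d
    return best, best_d, computed

points = [(0, 0), (1, 1), (5, 5), (9, 9), (10, 10)]
dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
pivot = (0, 0)
pd = [dist(p, pivot) for p in points]    # precomputed intrafile distances
result = best_match((9.4, 9.4), points, dist, pd, pivot)
print(result)   # nearest point (9, 9), found with one full distance computation
```

Here a single pivot suffices; the thesis's generalization is to make optimal use of an arbitrary given set of precomputed intrafile distances rather than one fixed reference object.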
-
TR1991-581
1991
Some Schwarz Methods for Symmetric and Nonsymmetric Elliptic Problems
Widlund, O.
Abstract
|
PDF
Title: Some Schwarz Methods for Symmetric and Nonsymmetric Elliptic Problems
Author(s): Widlund, O.
Abstract:
This paper begins with an introduction to additive and multiplicative Schwarz methods. A two-level method is then reviewed and a new result on its rate of convergence is established for the case when the overlap is small. Recent results by Xuejun Zhang, on multi-level Schwarz methods, are formulated and discussed. The paper is concluded with a discussion of recent joint results with Xiao-Chuan Cai on nonsymmetric and indefinite problems.
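As background for the entries on Schwarz methods (standard framework notation assumed here, not notation taken from this particular paper), the two variants can be summarized as follows, with $R_i$ the restriction to the $i$-th subspace and $A_i = R_i A R_i^{T}$ the local problem:

```latex
% Subspace correction operators (standard additive Schwarz framework):
P_i = R_i^{T} A_i^{-1} R_i \, A, \qquad A_i = R_i A R_i^{T}, \qquad i = 0, \dots, N.
% Additive Schwarz: the preconditioned operator, solved by conjugate gradients,
P_{\mathrm{ad}} = \sum_{i=0}^{N} P_i .
% Multiplicative Schwarz: the error propagation operator
E = (I - P_N)(I - P_{N-1}) \cdots (I - P_0).
```

The convergence rates discussed in these abstracts are bounds on the condition number of $P_{\mathrm{ad}}$ (or on the norm of $E$) that are independent of mesh size, and, in the multilevel results, of the number of levels.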
-
Ph.D. Thesis
1991
Toward a Fully Integrated VLSI CAD System: from Custom to Fully Automatic
You, Yongtao
Abstract
|
PDF
Title: Toward a Fully Integrated VLSI CAD System: from Custom to Fully Automatic
Candidate: You, Yongtao
Advisor(s): Siegel, Alan
Abstract:
This thesis describes an integrated CAD environment, which is intended to support almost all phases of the VLSI circuit design cycle, from high-level circuit description down to mask specification. Several VLSI CAD tools have been integrated into the environment, including a multi-level simulator Msim, a hardware description language CHDL, some automatic placement tools, a schematic layout editor, and the UC Berkeley-developed geometry layout editor Magic.
The multi-level simulator Msim supports top-down design by allowing circuits whose components are described at different levels to be simulated together. The levels of circuit description currently supported include a hardware description language CHDL, which is a variant of the C programming language for circuit behavior descriptions, a schematic layout representation, and the Magic layout from which masks for wafer fabrication can be generated.
The schematic layout editor allows designers to specify interconnections among circuit components in a very efficient manner. It supports both behavioral descriptions and high level geometric layout of a circuit. Designers can have a graphical view of their design, and specify, within this graphical organization, the behavioral description of components at different levels of abstraction. These schematic layouts with different levels of representation can be simulated using the multi-level simulator Msim.
The automatic placement tool presently performs bottom-up iterative improvement, with simulated annealing as its assistant when needed. An interactive graphics interface is provided which allows human intervention on intermediate as well as final layouts.
In addition, the linear (true) charge-sharing modeling problem with indeterminate transistor switches is shown to be NP-Complete, which explains why it is integrated exclusively within the lattice model for our switch-level simulation.
-
TR1991-583
1991
Domain Decomposition Algorithms for the Biharmonic Dirichlet Problem
Zhang, X.
Abstract
|
PDF
Title: Domain Decomposition Algorithms for the Biharmonic Dirichlet Problem
Author(s): Zhang, X.
Abstract:
We consider additive Schwarz methods for the biharmonic Dirichlet problem and show that the algorithms have optimal convergence properties for some conforming finite elements. Some multilevel methods are also discussed.
A class of multilevel methods for second order problems is considered in the additive Schwarz framework. It is established that, in the general case, the condition number of the iterative operator grows at most linearly with the number of levels. The bound is independent of the mesh sizes and the number of levels under a regularity assumption. This is an improvement of a result by Dryja and Widlund on a multilevel additive Schwarz algorithm, and the theory given by Bramble, Pasciak and Xu for the BPX algorithm.
Additive Schwarz and iterative substructuring algorithms for the biharmonic equation are also considered. These are domain decomposition methods which have previously been developed extensively for second order elliptic problems by Bramble, Pasciak and Schatz, Dryja and Widlund and others.
Optimal convergence properties are established for additive Schwarz algorithms for the biharmonic equation discretized by certain conforming finite elements. The number of iterations for the iterative substructuring methods grows only as the logarithm of the number of degrees of freedom associated with a typical subregion. It is also demonstrated that it is possible to simplify the basic algorithms. This leads to a decrease of the cost but not of the rate of convergence of the iterative methods. In the analysis, new tools are developed to deal with Hermitian elements. Certain new inequalities for discrete norms for finite element spaces are also used.
- TR1991-582 1991 Multilevel Additive Schwarz Methods Zhang, X. Abstract | PDF
- TR1991-584 1991 Studies in Domain Decomposition: Multilevel Methods and the Biharmonic Dirichlet Problem Zhang, X. Abstract | PDF
-
Ph.D. Thesis
1991
Edge representation from wavelet transform maxima
Zhong, Sifen
Abstract
|
PDF
Title: Edge representation from wavelet transform maxima
Candidate: Zhong, Sifen
Advisor(s): Mallat, Stephane
Abstract:
The multiscale edges of a signal are the sharp variation points measured at different scales. This thesis studies a model of multiscale edge representation based on the local maxima of the wavelet transform. The wavelet transform is a mathematical formulation of a multiscale decomposition. It decomposes a signal into multiple components indexed by a scale parameter. A particular class of wavelets is used such that each of these components is the first derivative of a smoothed version of the signal, with the scale parameter indicating the degree of smoothing. The local maxima of this wavelet transform therefore form a multiscale edge representation. This thesis shows that the local maxima not only identify the edges but also characterize them. An algorithm to reconstruct a signal from its local-maxima representation is developed. The experimental results show that the algorithm reconstructs the original signal and that the reconstruction is stable. This implies that the local-maxima representation is a reorganization of the signal information. Therefore, various pattern analysis algorithms can be developed based purely on the properties of edges. Image processing can also be done through the multiscale edge representation. An application to image coding is described.
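The core idea can be sketched numerically. The following is a minimal 1-D illustration, not the thesis's algorithm: Gaussian smoothing stands in for the wavelet, the derivative is a simple forward difference, and the scales and threshold are invented.

```python
# Minimal 1-D sketch of multiscale edges as local maxima of the derivative
# of a smoothed signal. Gaussian smoothing is an illustrative stand-in for
# the wavelet; scales and the flatness threshold are invented choices.
import math

def gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    ks = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    total = sum(ks)
    return [k / total for k in ks]

def smooth(signal, sigma):
    kern = gaussian_kernel(sigma)
    r = len(kern) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kern):
            idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp at the borders
            acc += k * signal[idx]
        out.append(acc)
    return out

def multiscale_edges(signal, scales=(1.0, 2.0, 4.0)):
    """Indices where |d/dx of the smoothed signal| is locally maximal,
    one list per scale -- a crude stand-in for wavelet-transform maxima."""
    edges = {}
    for s in scales:
        sm = smooth(signal, s)
        mags = [abs(sm[i + 1] - sm[i]) for i in range(len(sm) - 1)]
        edges[s] = [i for i in range(1, len(mags) - 1)
                    if mags[i] > mags[i - 1] and mags[i] >= mags[i + 1]
                    and mags[i] > 1e-9]        # threshold suppresses flat regions
    return edges

# A step edge at index 50 is recovered at index 49 at every scale.
step = [0.0] * 50 + [1.0] * 50
edges = multiscale_edges(step)
print(edges)
```

The reconstruction and stability results of the thesis concern the much harder inverse direction, recovering the signal from these maxima.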
-
TR1990-532
1990
On Triangulations of the 3-Ball and the Solid Torus
Bohus, G.;
Jockush, W.; Lee, C.; Prabhu, N.
Abstract
|
PDF
Title: On Triangulations of the 3-Ball and the Solid Torus
Author(s): Bohus, G.; Jockush, W.; Lee, C.; Prabhu, N.
Abstract:
We show that neither the 3-ball nor the solid torus admits a triangulation in which (i) every vertex is on the boundary, and (ii) every tetrahedron has exactly one triangle on the boundary. (Such triangulations are relevant to an unresolved conjecture of Perles.) Our result settles a question posed at the DIMACS Workshop on Polytopes and Convex Sets.
- TR1990-520 1990 Stable Perturbations of Nonsymmetric Matrices Burke, J. Abstract | PDF
-
TR1990-506
1990
Domain Decomposition Algorithms for Indefinite Elliptic Problems
Cai, X.;
Widlund, O.
Abstract
|
PDF
Title: Domain Decomposition Algorithms for Indefinite Elliptic Problems
Author(s): Cai, X.; Widlund, O.
Abstract:
Iterative methods for the linear systems of algebraic equations arising from elliptic finite element problems are considered. Methods previously known to work well for positive definite, symmetric problems are extended to certain nonsymmetric problems, which also can have some eigenvalues in the left half plane.
We first consider an additive Schwarz method applied to linear, second order, symmetric or nonsymmetric, indefinite elliptic boundary value problems in two and three dimensions. An alternative linear system, which has the same solution as the original problem, is derived and this system is then solved by using GMRES, an iterative method of conjugate gradient type. In each iteration step, a coarse mesh finite element problem and a number of local problems are solved on small, overlapping subregions into which the original region is subdivided. We show that the rate of convergence is independent of the number of degrees of freedom and the number of local problems if the coarse mesh is fine enough. The performance of the method is illustrated by results of several numerical experiments.
We also consider two other iterative methods for solving the same class of elliptic problems in two dimensions. Using an observation of Dryja and Widlund, we show that the rate of convergence of certain iterative substructuring methods deteriorates only quite slowly when the local problems increase in size. A similar result is established for Yserentant's hierarchical basis method.
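The overlapping additive Schwarz idea of the first part can be sketched on a tiny 1-D model problem. This is an illustrative damped stationary iteration with two invented subdomains, not the paper's GMRES-accelerated algorithm with a coarse mesh:

```python
# Damped additive Schwarz for the 1-D model problem A u = f with
# A = tridiag(-1, 2, -1). Grid size, subdomains, and damping are invented;
# real algorithms add a coarse problem and accelerate with GMRES/CG.

def matvec(u):
    """Apply the 1-D discrete Laplacian tridiag(-1, 2, -1)."""
    n = len(u)
    return [2 * u[i] - (u[i - 1] if i > 0 else 0.0)
            - (u[i + 1] if i < n - 1 else 0.0) for i in range(n)]

def solve_local(rhs):
    """Thomas algorithm for tridiag(-1, 2, -1) on a subdomain window."""
    n = len(rhs)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        m = 2.0 + c[i - 1]
        c[i] = -1.0 / m
        d[i] = (rhs[i] + d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def schwarz_step(u, f, domains, theta=0.5):
    """Solve the residual equation on each overlapping subdomain
    (zero Dirichlet data on the window ends) and add damped corrections."""
    r = [fi - ai for fi, ai in zip(f, matvec(u))]
    new = u[:]
    for lo, hi in domains:
        for k, v in enumerate(solve_local(r[lo:hi])):
            new[lo + k] += theta * v
    return new

n = 21
f = [1.0] * n
u = [0.0] * n
domains = [(0, 13), (8, 21)]     # two overlapping subdomains
res0 = max(abs(fi - ai) for fi, ai in zip(f, matvec(u)))
for _ in range(100):
    u = schwarz_step(u, f, domains)
res = max(abs(fi - ai) for fi, ai in zip(f, matvec(u)))
print(res0, res)
```

Each sweep is embarrassingly parallel across subdomains, which is the practical appeal of the additive (as opposed to multiplicative) variant.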
- TR1990-512 1990 Tight Bounds on the Complexity of the Boyer-Moore Pattern Matching Algorithm Cole, R. Abstract | PDF
-
TR1990-510
1990
On the Optimal Design of Columns Against Buckling
Cox, S.;
Overton, M.
Abstract
|
PDF
Title: On the Optimal Design of Columns Against Buckling
Author(s): Cox, S.; Overton, M.
Abstract:
We establish existence, derive necessary conditions, and construct and test an algorithm for the maximization of a column's Euler buckling load under a variety of boundary conditions over a general class of admissible designs. We prove that symmetric clamped-clamped columns possess a positive first eigenfunction and introduce a symmetric rearrangement that does not decrease the column's buckling load. Our necessary conditions, expressed in the language of Clarke's generalized gradient, subsume those proposed by Olhoff and Rasmussen, Masur, and Seiranian. The work of Olhoff and Rasmussen, Masur, and Seiranian sought to correct the necessary conditions of Tadjbakhsh and Keller who had not foreseen the presence of a multiple least eigenvalue. This remedy has been hampered by Tadjbakhsh and Keller's miscalculation of the buckling loads of their clamped-clamped and clamped-hinged columns. We resolve this issue in the appendix.
In our numerical treatment of the associated finite dimensional optimization problem we build on the work of Overton in devising an efficient means of extracting an ascent direction from the column's least eigenvalue. Owing to its possible multiplicity this is indeed a nonsmooth problem and again the ideas of Clarke are exploited.
-
TR1990-534
1990
Physical Idealization as Plausible Inference
Davis, E.
Abstract
|
PDF
Title: Physical Idealization as Plausible Inference
Author(s): Davis, E.
Abstract:
The analysis of physical systems almost always relies on idealized models of the objects involved. Any idealization, however, will be incorrect or insufficiently accurate some of the time. It seems reasonable, therefore, to view a physical idealization as a defeasible inference which can be withdrawn in the presence of contrary evidence. This talk discusses the consequences of such a view.
We focus on examples where a system may or may not go into a state where idealizations are violated, such as dropping a ball near an open switch connected across a battery. We show that:
- Non-monotonic logics will try to enforce the idealization by supposing that the ball will miss the switch. This anomaly does not seem to be solvable by the kinds of techniques that have been applied to the Yale Shooting Problem, which it superficially resembles. We show that this problem is analogous to anomalies in non-monotonic logic that are time-independent.
- A probabilistic analysis is possible, but it relies on independence assumptions that are hard to justify in general.
- For completely specified systems, the rule "If the idealization gives solvable equations, then assume that it holds" is, in fact, a monotonic system of inferences. It should therefore be possible to characterize this in a purely deductive theory. We show that this is, indeed, possible for simple cases, but can get messy in complex systems.
- Programs that make physical predictions can avoid these problems by simply avoiding reasoning from the future to the past. Though most current programs observe this restriction, it seems likely that more powerful and general systems will have to violate it, and thus deal with this issue.
- Finally, we look at dynamic systems where the idealization can be observed at any single instant, but it is inconsistent over extended intervals.
-
Ph.D. Thesis
1990
Detecting Nondeterminism in Shared Memory Parallel Programs
Dinning, Anne
Abstract
|
PDF
Title: Detecting Nondeterminism in Shared Memory Parallel Programs
Candidate: Dinning, Anne
Advisor(s): Mishra, Bud
Abstract:
This thesis addresses the problem of detecting of a specific type of nondeterminism in shared memory parallel programs known as access anomalies. An access anomaly occurs when an update to a shared variable X is concurrent with either a read of X or another update of X.
The first part of the work considers dynamic detection of access anomalies. We introduce a new technique called task recycling that detects access anomalies "on the fly" by monitoring the program execution. This technique is designed with two goals in mind. The first goal is minimal monitoring overhead. Costs are incurred only at thread create, terminate, and coordinate operations and every time a monitored variable is accessed. Because variable accesses are generally the most frequent operation, the task recycling technique reduces the overhead per variable access to a small constant. The second goal is generality. The task recycling technique is applicable to a wide variety of parallel constructs and all common synchronous and asynchronous coordination primitives. Combined with a protocol for specifying ordering constraints, the method of representing concurrency relationships in task recycling can be extended to detect general race conditions in parallel programs.
The second part of the thesis involves static detection of several types of nondeterminism that make dynamic anomaly detection inefficient. In particular, the notion of nondeterminism arising from critical section coordination is refined by distinguishing between three types of nondeterminism: parallel, sequential, and reference nondeterminism. The presence of these types of nondeterminism in a program impacts access anomaly detection in two significant ways: (i) how critical section coordination is modeled during anomaly detection, and (ii) the confidence level and complexity of guaranteeing that a program has no access anomalies. In particular, it is shown that access anomalies can be detected efficiently only if a program is parallel, sequential, and reference deterministic. Heuristics are presented that make access anomaly detection tractable in the presence of other nondeterminism through a better classification and semantic understanding of a coordination protocol.
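The access-anomaly definition can be made concrete with a small sketch: two accesses to the same shared variable are anomalous when at least one is an update and neither is ordered before the other. The happens-before encoding (a set of ordered pairs) and the trace below are illustrative, not the thesis's task-recycling structures.

```python
# Sketch of access-anomaly detection over a recorded trace. The encoding of
# happens-before as an explicit pair set and the access names are invented.

def concurrent(a, b, happens_before):
    """True if neither access is ordered before the other."""
    return (a, b) not in happens_before and (b, a) not in happens_before

def find_anomalies(accesses, happens_before):
    """accesses: list of (access_id, variable, kind), kind 'r' or 'w'."""
    anomalies = []
    for i in range(len(accesses)):
        for j in range(i + 1, len(accesses)):
            id1, var1, k1 = accesses[i]
            id2, var2, k2 = accesses[j]
            if (var1 == var2 and 'w' in (k1, k2)
                    and concurrent(id1, id2, happens_before)):
                anomalies.append((id1, id2))
    return anomalies

# An initializing write precedes both threads; the two threads themselves
# are unordered, so the update in t1 races with the read in t2.
accesses = [("t1:write", "X", "w"), ("t2:read", "X", "r"), ("t1:init", "X", "w")]
hb = {("t1:init", "t1:write"), ("t1:init", "t2:read")}
print(find_anomalies(accesses, hb))
```

An on-the-fly monitor like task recycling maintains the ordering information incrementally instead of scanning all pairs after the fact.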
- TR1990-498 1990 Performance of Shared Memory in a Parallel Computer Donovan, K. Abstract | PDF
-
TR1990-507
1990
Multilevel Additive Methods for Elliptic Finite Element Problems
Dryja, M.;
Widlund, O.
Abstract
|
PDF
Title: Multilevel Additive Methods for Elliptic Finite Element Problems
Author(s): Dryja, M.; Widlund, O.
Abstract:
An additive variant of the Schwarz alternating method is discussed. A general framework is developed, which is useful in the design and analysis of a variety of domain decomposition methods as well as certain multigrid methods. Three methods of this kind are then considered and estimates of their rates of convergence given. One is a Schwarz-type method using several levels of overlapping subregions. The others use multilevel, multigrid-like decompositions of finite element spaces and have previously been considered by Yserentant and Bramble, Pasciak and Xu. Throughout, we work with finite element approximations of linear, self-adjoint, elliptic problems.
-
TR1990-529
1990
Substructuring Methods for Parabolic Problems
Dryja, M.
Abstract
|
PDF
Title: Substructuring Methods for Parabolic Problems
Author(s): Dryja, M.
Abstract:
Domain decomposition methods without overlapping for the approximation of parabolic problems are considered. Two kinds of methods are discussed. In the first method systems of algebraic equations resulting from the approximation on each time level are solved iteratively with a Neumann-Dirichlet preconditioner. The second method is direct and similar to certain iterative methods with a Neumann-Neumann preconditioner. An analysis of convergence of the methods is presented.
-
TR1990-531
1990
Cutting a Polytope
Jockush, W.;
Prabhu, N.
Abstract
|
PDF
Title: Cutting a Polytope
Author(s): Jockush, W.; Prabhu, N.
Abstract:
We show that given two vertices of a polytope one cannot in general find a hyperplane containing the vertices that has two or more facets of the polytope in one closed half-space. Our result refutes a long-standing conjecture.
We prove the result by constructing a 4-dimensional polytope that provides the counter-example. Also, we show that such a cutting hyperplane can be found for each pair of vertices, if the polytope is either simplicial or 3-dimensional.
-
TR1990-503
1990
Tree Locking on Changing Trees
Lanin, V.;
Shasha, D.
Abstract
|
PDF
Title: Tree Locking on Changing Trees
Author(s): Lanin, V.; Shasha, D.
Abstract:
The tree locking protocol is a deadlock-free method of concurrency control defined and verified by Silberschatz and Kedem for data organized in a directed tree. Can the tree protocol work for applications that change the tree? We define a set of three operations capable of changing any tree to any other tree and show that the tree protocol continues to ensure serializability and deadlock-freedom in the presence of these operations.
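The static tree protocol being extended can be sketched as a checker over a lock/unlock trace. The rules below cover only the fixed-tree case (the first lock may be anywhere, later locks require holding the parent, and no node is ever relocked); the node names are invented, and the changing-tree operations of the paper are not modeled.

```python
# Illustrative checker for the classic tree-locking rule of Silberschatz
# and Kedem, for a fixed tree only. Node names are invented examples.

def follows_tree_protocol(ops, parent):
    """ops: list of ('lock'|'unlock', node); parent: child -> parent map."""
    held, ever_locked = set(), set()
    first = True
    for action, node in ops:
        if action == 'lock':
            if node in ever_locked:
                return False           # relocking a released node is forbidden
            if not first and parent.get(node) not in held:
                return False           # must hold the parent (except first lock)
            held.add(node)
            ever_locked.add(node)
            first = False
        else:
            held.discard(node)         # unlocking is always allowed
    return True

parent = {'b': 'a', 'c': 'a', 'd': 'b'}
ok = [('lock', 'a'), ('lock', 'b'), ('unlock', 'a'),
      ('lock', 'd'), ('unlock', 'b'), ('unlock', 'd')]
bad = [('lock', 'b'), ('lock', 'c')]   # c's parent 'a' is not held
print(follows_tree_protocol(ok, parent), follows_tree_protocol(bad, parent))
```

The paper's contribution is showing that serializability and deadlock-freedom survive when restructuring operations change the `parent` relation mid-execution.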
-
TR1990-518
1990
Dag Representation and Optimization of Rewriting
Li, K.
Abstract
|
PDF
Title: Dag Representation and Optimization of Rewriting
Author(s): Li, K.
Abstract:
In all applications of term rewriting systems, computing the normal forms of terms is the fundamental step. In this paper, we propose a directed acyclic graph (dag) representation for terms and term rewriting. Rewriting on dags is much more efficient than rewriting on trees. We design several efficient strategies for rewriting on dags. With the dag representation, we can even obtain efficient rewriting strategies for non-left-linear rewriting systems.
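Sharing structurally equal subterms is the heart of a dag representation; it is also why non-left-linear rules benefit, since equality of shared subterms becomes a pointer comparison. A minimal hash-consing sketch (invented constructors, not the paper's data structures):

```python
# Hash-consing sketch: structurally equal subterms are stored once, so the
# term f(g(x), g(x)) is a dag with a single g(x) node. The constructors
# are invented examples, not the paper's representation.

_table = {}

def term(op, *args):
    """Return the unique node for op(args); equal subterms are shared."""
    key = (op, args)
    if key not in _table:
        _table[key] = key      # the interned tuple itself serves as the node
    return _table[key]

x = term('x')
f = term('f', term('g', x), term('g', x))   # the two g(x) arguments are shared
print(f[1][0] is f[1][1])                    # identity, not just equality
```

With this sharing, rewriting a shared subterm rewrites it once for all occurrences, and a non-left-linear condition such as "both arguments equal" is a constant-time identity test.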
-
Ph.D. Thesis
1990
Program transformation for efficient derivation of multiple solutions in concurrent logic languages
Markantonatos, Nikolaos
Abstract
|
PDF
Title: Program transformation for efficient derivation of multiple solutions in concurrent logic languages
Candidate: Markantonatos, Nikolaos
Advisor(s): Harrison, Malcolm C.
Abstract:
Concurrent logic languages provide a flexible and powerful vehicle for expressing parallel programs using explicit processes. However, their drastic departure from conventional logic programming with respect to completeness renders them unsuitable for a variety of useful applications involving search. A multiple solution extension to concurrent logic languages appears to successfully obtain the effect of backtracking in a parallel environment, but has been impeded by inefficiency problems. Moreover, the multiple solution subset introduces a new language which is incoherent with the single solution base language. We propose a multiple solution subset definition that adheres to the base language both syntactically and semantically. Subsequently, we advocate a source-to-source transformational approach for the efficient implementation of the subset. Multiple solution programs are converted at compile-time into equivalent single solution programs that derive all possible solutions into a single list. Alternative solutions are obtained in an eager or lazy fashion as specified by the program. A number of multiple solution program classes that are transformable into efficient single solution programs are identified and the corresponding transformation procedures are presented and further illustrated using a variety of examples. The techniques employed for the various transformations include partial evaluation, abstract interpretation, continuation-based transformation, layered stream transformation and loop fusion. As a result of such a static transformational methodology, a broad range of multiple solution programs enjoy efficient execution. We believe that our approach forms a definite step towards an efficient multiple solution subset for concurrent logic languages.
-
Ph.D. Thesis
1990
Data structures and algorithms for hierarchical memory machines
Mirza, Mirza G. R.
Abstract
|
PDF
Title: Data structures and algorithms for hierarchical memory machines
Candidate: Mirza, Mirza G. R.
Advisor(s): Siegel, Alan
Abstract:
This thesis analyzes the influence of hierarchical memory in models of practical computation. While hierarchical memory is the standard in real computing systems, the most common models of computation, Random Access Memory Machines and Turing Machines, do not reflect this form of memory. Our main contributions are: (1) Models of computation that have memory hierarchy, and which provide a rich structure for the complexity analysis of real computational problems. (2) Optimal bounds for problems such as sorting, with respect to both space and time, for a variety of memory access costs. (3) Related bounds for other problems, including constrained multitape merging and the implementation of Priority Queues and B-Trees. (4) The introduction of multiprogramming and multiprocessing concepts for these models, and an analysis of their relative computational power.
-
TR1990-523
1990
Execution of Regular DO Loops on Asynchronous Multiprocessors
Ouyang, P.
Abstract
|
PDF
Title: Execution of Regular DO Loops on Asynchronous Multiprocessors
Author(s): Ouyang, P.
Abstract:
This paper studies issues concerning parallel execution of regular Fortran DO loops on an asynchronous shared-memory multiprocessor, where each iteration is the basic unit to be executed by a single processing element. An iteration is a dependent predecessor of another iteration if execution of the latter iteration has to wait until execution of the former has completed. During the execution of a DO loop, an iteration passes through four states: idle, pending, ready, and finished. An iteration is idle if none of its dependent predecessors have completed; an iteration is pending if some, but not all, of its dependent predecessors have completed; an iteration is ready if all its dependent predecessors have completed but it has not itself completed; otherwise, an iteration is finished. In addition, an iteration without any dependent predecessors is called an initial iteration, which can only be in the ready and finished states. By describing an execution scheme, this paper studies the characteristics of Fortran DO loops that are related to the efficiency of the execution. Specifically, this paper investigates (1) the number of initial iterations, (2) the maximum number of ready iterations at any instant during the execution, (3) the maximum number of pending iterations at any instant during the execution, (4) a hash function to disperse different pending iterations, and (5) the parallel execution time.
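The four states follow directly from the definitions; a small sketch, in which the dependence pattern (each iteration waiting on its one or two most recent predecessors) is an invented example rather than a loop from the paper:

```python
# Sketch of the idle / pending / ready / finished classification above.
# The predecessor map is an invented example dependence pattern.

def state(it, preds, finished):
    """Classify an iteration given its dependent predecessors."""
    if it in finished:
        return 'finished'
    done = [p for p in preds[it] if p in finished]
    if len(done) == len(preds[it]):   # covers initial iterations (no preds)
        return 'ready'
    return 'pending' if done else 'idle'

preds = {0: [], 1: [0], 2: [0, 1], 3: [1, 2], 4: [2, 3]}
finished = {0, 1}
states = {i: state(i, preds, finished) for i in preds}
print(states)
```

A runtime scheduler only ever dispatches iterations in the ready state; the paper's quantities (1)-(3) bound how large the ready and pending sets can grow.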
-
TR1990-505
1990
Large-Scale Optimization of Eigenvalues
Overton, Michael L.
Abstract
|
PDF
Title: Large-Scale Optimization of Eigenvalues
Author(s): Overton, Michael L.
Abstract:
Optimization problems involving eigenvalues arise in many applications. Let x be a vector of real parameters and let A(x) be a differentiable symmetric matrix function of x. We consider a particular problem which occurs frequently: the minimization of the maximum eigenvalue of A(x), subject to linear constraints and bounds on x. The eigenvalues of A(x) are not differentiable at points x where they coalesce, so the optimization problem is said to be nonsmooth. Furthermore, it is typically the case that the optimization objective tends to make eigenvalues coalesce at a solution point.
There are three main purposes of the paper. The first is to present a clear and self-contained derivation of the Clarke generalized gradient of the max eigenvalue function in terms of a "dual matrix". The second purpose is to describe a new algorithm, based on the ideas of a previous paper by the author (SIAM J. Matrix Anal. Appl. 9 (1988) 256-268), which is suitable for solving large-scale eigenvalue optimization problems. The algorithm uses a "successive partial linear programming" formulation which should be useful for other large-scale structured nonsmooth optimization problems as well as large-scale nonlinear programming with a relatively small number of nonlinear constraints. The third purpose is to report on our extensive numerical experience with the new algorithm, solving problems which arise in the following application areas: the optimal design of columns against buckling; the construction of optimal preconditioners for numerical linear equation solvers; the bounding of the Shannon capacity of a graph. We emphasize the role of the dual matrix, whose dimension is equal to the multiplicity of the minimal max eigenvalue. The dual matrix is computed by the optimization algorithm and used for verification of optimality and sensitivity analysis.
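The nonsmoothness at coalescence is already visible in a 2-by-2 example. The closed form below is a standard toy illustration and is unrelated to the paper's large-scale algorithm:

```python
# Toy illustration of nonsmoothness: for the symmetric matrix [[a, b], [b, c]]
# the largest eigenvalue has a closed form, and for A(x) = diag(x, -x) it
# equals |x|, which has a kink exactly where the two eigenvalues coalesce.
import math

def lambda_max_2x2(a, b, c):
    """Largest eigenvalue of the symmetric matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    return mean + math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)

# lambda_max(diag(x, -x)) = |x|: smooth away from 0, nondifferentiable at 0.
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(x, lambda_max_2x2(x, 0.0, -x))
```

Minimizing such a function drives the parameters toward the kink, which is why the solution typically sits at a point of multiple eigenvalues and why generalized gradients, rather than ordinary derivatives, are the right tool.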
-
Ph.D. Thesis
1990
Design and implementation of HyTeK: A knowledge-based hypertext system
Perez-Carballo, Jose F.
Abstract
|
PDF
Title: Design and implementation of HyTeK: A knowledge-based hypertext system
Candidate: Perez-Carballo, Jose F.
Advisor(s): Strzalkowski, Tomek; Shasha, Dennis
Abstract:
A Hypertext system is a text data base where the units of information are interlinked using pointers that the user can follow. We call the pointers explicit links (as opposed to computed or virtual links). HyTeK provides a set of tools designed to help the user explore the information contained in the system. The information contained in the system is represented using at least one of the three following methods: fragments of full text, explicit links between fragments, and a collection of frame-like objects organized in a taxonomy. Explicit links are used to represent discourse relationships between fragments of text. The frame-like objects, called Topics, represent concepts in the domain of the text contained in the fragments. Topics are used to index the fragments for retrieval. The taxonomy of Topics represents some of the relationships between fragments that a traditional Hypertext System would represent using explicit links. HyTeK's query system uses the taxonomy of Topics in order to implement tools that allow the user to retrieve fragments selectively by their contents. A user queries the system by building a set of Topics in an interactive process of reformulation. Query reformulation is supported by a set of tools that allow the user to explore the space of Topics. The relationships between the Topics are used to define a similarity measure which is used to rank the target set of the query. This work describes an automatic indexing scheme, a query system and an extension of the Knowledge Representation (KR) system NIKL (KLONE) that was used in HyTeK to implement the taxonomy of Topics. A prototype of HyTeK was implemented in Common-Lisp on a Symbolics 3645 running Genera 7.2. The system has been extensively tested on several test collections of a total of 1000 fragments of text about AIDS treatments.
The results indicate clear advantages over traditional Information Retrieval systems and suggest that the use of a KR system for the implementation of a query module for a Hypertext System is promising.
-
Ph.D. Thesis
1990
On a generalization of Herbrand's theorem
Policriti, Alberto
Abstract
|
PDF
Title: On a generalization of Herbrand's theorem
Candidate: Policriti, Alberto
Advisor(s): Davis, Martin D.
Abstract:
In this thesis we prove a generalized version of Herbrand's theorem. Our result guarantees the existence of a semi-decision procedure à la Herbrand for testing unsatisfiability with respect to a given theory T, in which the decision procedure used at the ground level depends upon T. This is in contrast to the classical case, in which the procedure used at the ground level is simply a test for propositional satisfiability. The problem of finding suitable analogues of the exhaustive search procedures for the general case is also tackled, and one such generalization is proposed. The underlying motivation for this study was to find theoretical results that could provide the basis for a set-theoretic proof checker. Thus, the case of set theory is considered in more detail. In particular, decidability and undecidability results for classes of set-theoretic, purely universal formulae are proved.
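The classical ground-level test that the thesis generalizes can be sketched as brute-force propositional satisfiability over ground clauses; the clause encoding below (lists of string literals, `-` for negation) is an invented illustration, and the thesis's contribution is replacing exactly this test with a T-specific decision procedure.

```python
# Classical Herbrand-style ground test: propositional satisfiability of a
# set of ground clauses, checked by brute-force truth assignment. The
# string-literal clause encoding is an invented example format.
from itertools import product

def prop_satisfiable(clauses):
    """clauses: list of clauses; a clause is a list of literals like 'p', '-p'."""
    atoms = sorted({lit.lstrip('-') for cl in clauses for lit in cl})
    for bits in product([False, True], repeat=len(atoms)):
        val = dict(zip(atoms, bits))
        # a literal is true iff its atom's value disagrees with its negation flag
        if all(any(val[l.lstrip('-')] != l.startswith('-') for l in cl)
               for cl in clauses):
            return True
    return False

# {p} and {~p} together are unsatisfiable; {p or q} is satisfiable.
print(prop_satisfiable([['p'], ['-p']]), prop_satisfiable([['p', 'q']]))
```

In the classical semi-decision procedure, ever larger conjunctions of ground instances are fed to such a test; unsatisfiability of some finite conjunction witnesses unsatisfiability of the original formula.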
-
TR1990-533
1990
On a Conjecture of Micha Perles
Prabhu, N.
Abstract
|
PDF
Title: On a Conjecture of Micha Perles
Author(s): Prabhu, N.
Abstract:
We prove a conjecture of Micha Perles concerning simple polytopes for a subclass that properly contains the duals of stacked polytopes and crosspolytopes. As a consequence of a special property of this subclass, it also follows that the entire combinatorial structure of a polytope in the subclass can be recovered from its graph by applying our results recursively.
-
Ph.D. Thesis
1990
Space-variant computer vision with a complex-logarithmic sensor geometry
Rojer, Alan S.
Abstract
|
PDF
Title: Space-variant computer vision with a complex-logarithmic sensor geometry
Candidate: Rojer, Alan S.
Advisor(s): Schwartz, Eric
Abstract:
The complex logarithm as a conformal mapping has drawn interest as a sensor architecture for computer vision due to its pseudo-invariance with respect to rotation and scaling, its high ratio of field width to resolution for a given number of pixels, and its utilization in biological vision as the topographic mapping from the retina to primary visual cortex. This thesis extends the computer vision applications of the complex-logarithmic geometry. Sensor design is based on the complex log mapping w = log(z + a), with real a > 0, which smoothly removes the singularity of the log at the origin. Previous applications of the complex-logarithmic geometry to computer vision, graphics, and sensory neuroscience are surveyed. A quantitative analysis of the space complexity of a complex-logarithmic sensor as a function of map geometry, field width, and angular resolution is presented. The computer-graphic problems of warping uniform scenes according to the complex logarithm and of inverting log-mapped scenes to recover the original uniform scene are considered, as is the problem of blending the resulting inverse log maps to reconstruct the original (uniform) scene. A series of simple algorithms for segmentation of log scenes by contour completion and region filling is presented. A heuristic algorithm for figure/ground segmentation using the log geometry is also shown. The problem of fixation-point selection (visual attention) is considered. Random selection of fixation points, inhibition around previous fixations, spatial and temporal derivatives in the sensor periphery, and regions found by segmentation are all examined as heuristic attentional algorithms. For the special case where targets can be parametrically defined, a theory of model-based attention based on the Hough transform is introduced. A priori knowledge about the consistency between potential objects in the scene and measured features in the scene is used to select fixation points.
The exponential storage requirements of the usual Hough transform are avoided.
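The pseudo-invariance of the mapping is easy to check numerically. In the sketch below the values are illustrative, and a = 0 is used so that the scaling invariance is exact (with a > 0 it holds only approximately, away from the fovea):

```python
# Numeric check of the log map's pseudo-invariance: scaling the scene by a
# factor shifts the mapped image by log(factor) along the real axis, and
# rotation shifts it along the imaginary axis. Values are illustrative.
import cmath

def log_map(z, a=1.0):
    """Sensor mapping w = log(z + a); a > 0 removes the singularity at 0,
    and a = 0 recovers the pure log map used here for an exact check."""
    return cmath.log(z + a)

z = complex(3.0, 4.0)
w1 = log_map(2 * z, a=0.0)   # scale the scene point by 2 ...
w2 = log_map(z, a=0.0)
print(w1 - w2)               # ... and the mapped point shifts by log 2 in Re
```

This shift property is what lets scale and rotation changes of a centered target appear as mere translations in the log-mapped image.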
-
Ph.D. Thesis
1990
SAGE: A real-time operating system for robotic supervisory control
Salkind, Louis K.
Abstract
|
PDF
Title: SAGE: A real-time operating system for robotic supervisory control
Candidate: Salkind, Louis K.
Advisor(s): Mishra, Bud
Abstract:
The next generation of robotic applications--computer integrated manufacturing, teleoperation, and mobile autonomous robots--will require far more computer systems support than currently available. In particular, real-time supervisory control systems will be needed to integrate an increasing number of sensors and actuators, as well as to communicate with other computers in a distributed environment. This thesis describes the design and implementation of SAGE, an operating system built specifically for real-time robotic supervisory control. The SAGE kernel runs on off-the-shelf Motorola 68020 processor boards, and features lightweight processes, virtual memory support, extensible low-overhead synchronization primitives, and real-time communications capabilities. Because SAGE is one of the first systems built for robotic supervisory control, the thesis focuses on the issues and design tradeoffs that arise in building a supervisory control operating system. The thesis also describes how SAGE was used to control a number of intelligent devices, including a Utah/MIT hand and a PUMA robot arm. The robotic experiments performed demonstrate that the operating system can be used in real-time supervisory control applications.
-
TR1990-514
1990
Beyond Fail-Stop: Wait-Free Serializability and Resiliency in the Presence of Slow-Down Failures
Shasha, D.;
Turek, J.
Abstract
|
PDF
Title: Beyond Fail-Stop: Wait-Free Serializability and Resiliency in the Presence of Slow-Down Failures
Author(s): Shasha, D.; Turek, J.
Abstract:
Historically, database researchers have dealt with two kinds of process failures: fail-stop failures and malicious failures. Under the fail-stop assumption, processes fail by halting. Such failures are easily detectable. Under the malicious (or Byzantine) failure assumption, processes fail by behaving unpredictably, perhaps as adversaries. Such failures are not necessarily detectable. When system designers discuss fault tolerance, they typically restrict themselves to the problem of handling fail-stop failures only. This paper proposes an intermediate failure model and presents a practical algorithm for handling transactions under this model. The new failure model allows processes to fail by either slowing down or stopping; slow processes may later speed up, continue to proceed slowly, or (eventually) stop. We call such failures slow-down failures. The model does not assume the ability to distinguish among these possibilities, say, by using a timeout mechanism, nor does it assume that it is possible to kill a slow process. Our algorithm, instead, allows for a new process to be dispatched to do the job that had been assigned to a slow process. The problem is that several processes may end up doing the same task and interfere with one another. Our algorithm controls such interference while guaranteeing both serializability and resiliency.
-
TR1990-519
1990
A Domain Decomposition Algorithm for Elliptic Problems in Three Dimensions
Smith, B.
Abstract
|
PDF
Title: A Domain Decomposition Algorithm for Elliptic Problems in Three Dimensions
Author(s): Smith, B.
Abstract:
Most domain decomposition algorithms have been developed for problems in two dimensions. One reason for this is the difficulty in devising a satisfactory, easy-to-implement, robust method of providing global communication of information for problems in three dimensions. Several methods that work well in two dimensions do not perform satisfactorily in three dimensions.
A new iterative substructuring algorithm for three dimensions is proposed. It is shown that the condition number of the resulting preconditioned problem is bounded independently of the number of subdomains and that the growth is quadratic in the logarithm of the number of degrees of freedom associated with a subdomain. The condition number is also bounded independently of the jumps in the coefficients of the differential equation between subdomains. The new algorithm also has more potential parallelism than the iterative substructuring methods previously proposed for problems in three dimensions.
-
TR1990-517
1990
Domain Decomposition Algorithms for the Partial Differential Equations of Linear Elasticity
Smith, B.
Abstract
|
PDF
Title: Domain Decomposition Algorithms for the Partial Differential Equations of Linear Elasticity
Author(s): Smith, B.
Abstract:
The use of the finite element method for elasticity problems results in extremely large, sparse linear systems. Historically these have been solved using direct solvers like Choleski's method. These linear systems are often ill-conditioned and hence require good preconditioners if they are to be solved iteratively. We propose and analyze three new, parallel iterative domain decomposition algorithms for the solution of these linear systems. The algorithms are also useful for other elliptic partial differential equations.
Domain decomposition algorithms are designed to take advantage of a new generation of parallel computers. The domain is decomposed into overlapping or non-overlapping subdomains. The discrete approximation to a partial differential equation is then obtained iteratively by solving problems associated with each subdomain. The algorithms are often accelerated using the conjugate gradient method.
The first new algorithm presented here borrows heavily from multi-level type algorithms. It involves a local change of basis on the interfaces between the substructures to accelerate the convergence. It works well only in two dimensions.
The second algorithm is optimal in that the condition number of the iteration operator is bounded independently of the number of subdomains and unknowns. It uses non-overlapping subdomains, but overlapping regions of the interfaces between subdomains. This is an additive Schwarz algorithm, which works equally well in two or three dimensions.
The third algorithm is designed for problems in three dimensions. It includes a coarse problem associated with the unknowns on the wirebaskets of the subdomains. The new method offers more potential parallelism than previous algorithms proposed for three dimensional problems since it allows for the simultaneous solution of the coarse problem and the local problems.
-
Ph.D. Thesis
1990
The APRAM: A model for asynchronous parallel computation
Zajicek, Ofer
Abstract
|
PDF
Title: The APRAM: A model for asynchronous parallel computation
Candidate: Zajicek, Ofer
Advisor(s): Cole, Richard
Abstract:
It is becoming increasingly clear that parallel computers will play a significant role in computer science and its applications. To develop parallel machines, and to take advantage of them as they become available, it is important to understand the issues underlying parallel computation. This thesis investigates one such issue: the synchronization costs of shared-memory parallel computation. It defines the APRAM model, an asynchronous variation of the PRAM model, and analyzes a number of fundamental algorithms in this model under three different complexity measures. The first part of the thesis defines the rounds complexity, which describes the complexity of an algorithm as a function of the slowest process. It is used to measure the explicit costs of synchronization: the cost of executing extra code in order to achieve synchronization. Three algorithms are analyzed under this complexity measure: a tree-based summation algorithm; a list-based recursive doubling algorithm; and an algorithm for computing the connected components of an undirected graph. In all three cases it is shown that global synchronization can be replaced by local synchronization, thereby reducing the explicit costs of synchronization. The connectivity algorithm is significantly more substantial than the other two. We avoid the need to synchronize the processes, thereby obtaining an algorithm whose behavior appears somewhat chaotic. Due to its apparently chaotic nature and the unpredictability of the asynchronous environment, its analysis is quite challenging. In an asynchronous environment processes may proceed at different speeds. In the second part of the thesis we model the non-uniformity of the environment by defining the speeds of the processes to be random variables with a known probability distribution.
We then quantify conditions under which asynchronous execution may have a significant advantage over lock-step execution, even if the explicit costs of a lock-step execution are ignored. Both the summation algorithm and the recursive doubling algorithm are analyzed under two different probability distributions. In addition, we quantify conditions under which the list-based recursive doubling algorithm is significantly faster than the tree-based summation algorithm.
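The recursive doubling pattern analyzed above can be illustrated with a short sequential simulation of its rounds (the function name and the sequential framing are ours, purely for illustration; in an APRAM each position would be an asynchronous process, which is exactly what this simulation abstracts away):

```python
def recursive_doubling_prefix_sums(values):
    # Simulate recursive doubling: after round r, position i holds the
    # sum of values[max(0, i - 2**r + 1) .. i].  After ceil(log2 n)
    # rounds every position holds its full prefix sum.
    x = list(values)
    n, step = len(x), 1
    while step < n:
        x = [x[i] + (x[i - step] if i >= step else 0) for i in range(n)]
        step *= 2
    return x
```

The point of the asynchronous analysis is that each position only needs the value written `step` places behind it, a local dependency, rather than a global barrier between rounds.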
-
Ph.D. Thesis
1989
Combinatorial and algorithmic analysis of space decomposition problems
Aronov, Boris
Abstract
|
PDF
Title: Combinatorial and algorithmic analysis of space decomposition problems
Candidate: Aronov, Boris
Advisor(s): Sharir, Micha
Abstract:
The first part of the thesis studies geodesic Voronoi diagrams. The closest-site (respectively, furthest-site) Voronoi diagram of a finite set of sites in Euclidean space is a classical geometric structure, which partitions the space into a set of Voronoi cells, each associated with a site, so that any point in the cell of site s is closer to s (resp. further from s) than to any other site. The structure of such diagrams for point sites in the plane has been completely characterized, and well-known efficient algorithms exist for computing them. Voronoi diagrams have been generalized by replacing the Euclidean distance by a more general metric and/or relaxing the assumption that sites be single points. We consider the closest- and the furthest-site Voronoi diagrams for a set of k point sites in a simple n-gon, defined by the internal geodesic distance inside the polygon. We demonstrate that the planar map defined by either diagram is comprised of O(n + k) features of bounded complexity each, and describe nearly optimal algorithms for constructing the two Voronoi diagrams. Namely, the closest-site geodesic Voronoi diagram can be computed in time O((n + k)log(n + k)log n), while O((n + k)log(n + k)) time is sufficient for the furthest-site diagram. The second part of the thesis analyzes the structure of an arrangement of flat triangles in 3-space. The combined combinatorial complexity of all non-convex cells (i.e., non-convex components of the complement of the union of the triangles), maximized over all arrangements of n triangles, is shown to be roughly $O(n^{7/3})$, improving the best previously known upper bound of $O(n^{3-1/49})$ for a smaller quantity, the maximum combinatorial complexity of a single cell.
Our result has applications to algorithmic motion planning, stemming from the well-known technique that transforms a polyhedral body translating in a polyhedral environment into a collection of convex polygonal plates in three-dimensional space; the set of placements of the body reachable from a starting configuration along a collision-free path corresponds to a cell in the arrangement of these plates. Thus analyzing the maximum combinatorial complexity of a single cell, and obtaining a comparably efficient algorithm for its calculation, constitutes a satisfactory solution to the translational motion planning problem just mentioned. To this end, we also consider the problem of computing a single cell or a subset of cells in a three-dimensional arrangement of triangles, providing a nearly worst-case optimal randomized algorithm for the former problem and a less efficient procedure for the latter. In addition, we examine a few special classes of arrangements for which better estimates on the maximum single-cell complexity can be deduced and where computing a cell or any collection of cells appears easier.
-
Ph.D. Thesis
1989
Data communication in robot control systems
Clark, Dayton R., Jr.
Abstract
|
PDF
Title: Data communication in robot control systems
Candidate: Clark, Dayton R., Jr.
Advisor(s): Mishra, Bud
Abstract:
Robots and robot controllers are becoming more sophisticated. Consequently, the demands on the controller's operating system are increasing. The lower levels of robot control systems (indeed, most real-time control systems) are characterized by servo loops. This thesis examines servo loops and how they affect data communications within robot control systems. In the two systems described in this thesis, the special characteristics of servo loops are exploited to enhance the data communications. HIC is an operating system for hierarchies of servo loops. It uses rate-monotonic scheduling for the periodic servo loop processes. HIC events (or processes), which are used to implement servo loops, are not allowed to block; they surrender the processor only upon completion or when preempted by a higher-priority process. A non-blocking communication structure, Periodic Data Buffers (PDBs), was developed for inter-process communication. HIC has been implemented and is used successfully in a controller for the Utah/MIT hand. GANGLIA is a proposed real-time communication network. It is intended to allow the processors in a robot controller to be distributed within the robot, so that the processors can be close to the sensors and actuators they control. Much of the traffic on such a network would be periodic. GANGLIA uses a central controller which allocates access to the network. For the periodic traffic a fixed schedule, produced off-line, is used; for the aperiodic traffic, round-robin polling is used. Unlike most protocols, messages do not contain the address of the destination node. Instead, each message is labeled with the name of its contents. Each node examines each message and decides whether or not it is interested in it. A special communication controller in each node (the Communication Memory Management Unit) examines and selects the messages. The result of this protocol is a network-wide common memory.
In this thesis, the GANGLIA protocol is described in detail and some preliminary analysis of its effectiveness in some real robot systems is given.
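The non-blocking flavor of a Periodic Data Buffer can be illustrated with a minimal double-buffer sketch (the class name, fields, and double-buffer scheme are illustrative assumptions for a single-writer buffer, not the thesis's actual PDB design):

```python
class PeriodicDataBuffer:
    """Sketch of a non-blocking periodic buffer: a single writer
    publishes whole samples, readers always see the most recent
    complete sample, and neither side ever waits for the other."""

    def __init__(self):
        self._slots = [None, None]
        self._latest = 0          # index of the last fully written slot

    def write(self, sample):
        spare = 1 - self._latest  # write into the slot readers ignore
        self._slots[spare] = sample
        self._latest = spare      # publish with a single index store

    def read(self):
        return self._slots[self._latest]
```

The key property for servo loops is that `write` and `read` each complete in a bounded number of steps, so a periodic process never blocks on communication; a stale sample is returned rather than waiting for a fresh one.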
-
Ph.D. Thesis
1989
On-line motion planning
Cox, James L.
Abstract
|
PDF
Title: On-line motion planning
Candidate: Cox, James L.
Advisor(s): Yap, Chee
Abstract:
In this thesis we investigate the area of online, or exploratory, motion planning. We develop algorithms for planning the motion of a planar rod (or ladder) and of a three-link planar arm moving amidst an environment containing obstacles bounded by simple, closed polygons. The exact shape, number, and location of the obstacles are assumed unknown to the planning algorithm, which can only obtain information about the obstacles by detecting points of contact with them. The ability to detect contact with obstacles is formalized by move primitives that we call guarded moves. We call ours the online motion planning problem, as opposed to the usual offline version. This is a significant departure from the usual setting for motion planning problems, in which the algorithm is given an explicit description of the scene as part of its input. What we demonstrate is that the retraction method can be applied, although new issues arise that have no counterparts in the usual setting. For the rod we obtain an algorithm with path complexity $O(m) = O(n^2)$ guarded moves, where $n$ is the number of obstacle walls and $m$ is the number of pairs of obstacle walls and corners at distance less than or equal to the length of the ladder; this matches the known lower bound (Ork85), which holds for both the online and offline (where the environment is explicitly given) versions of the problem. The computational complexity of the algorithm, $O(m \log n)$, matches the best known algorithm (SfS) for the offline version. For the arm we obtain an algorithm with path complexity $O(m) = O(n^3)$, where $n$ is the number of obstacle walls and $m$ is the number of pairs of obstacle features that the linkage can simultaneously contact. The computational complexity is $O(n^3 \log n)$. Our constraint-based approach can also be extended to obtain algorithms for $k > 3$ link arms that are polynomial for each $k$: if $k$ is fixed, the complexity is proportional to $n^k$.
-
Ph.D. Thesis
1989
Quantitative analysis of problems in computer algebra: Gröbner bases and the Nullstellensatz
Dube, Thomas William
Abstract
|
PDF
Title: Quantitative analysis of problems in computer algebra: Gröbner bases and the Nullstellensatz
Candidate: Dube, Thomas William
Advisor(s): Yap, Chee
Abstract:
This thesis presents new quantitative results concerning multivariate polynomial ideals. Since these ideals are the basic objects of (computational) algebraic geometry, these results have important ramifications for algebraic algorithms, particularly the solving of simultaneous equations. Furthermore, all the new theorems are proven using only constructive techniques and basic algebra. In many cases, the proofs provide algorithms for constructing the objects which the theorems describe. Among the results assembled here, three are of particular importance. The first shows that every ideal and residue class ring can be decomposed into simple pieces called cones. Next, the cone decomposition is used to produce a new upper bound on the degree of polynomials which appear in a reduced Gröbner basis. Finally, a new tight upper bound for the exponent in Hilbert's Nullstellensatz is demonstrated.
-
Ph.D. Thesis
1989
SMARTS--Shared-memory Multiprocessor Ada Run Time Supervisor
Flynn-Hummel, Susan Frances
Abstract
|
PDF
Title: SMARTS--Shared-memory Multiprocessor Ada Run Time Supervisor
Candidate: Flynn-Hummel, Susan Frances
Advisor(s): Schonberg, Edmond
Abstract:
The programming language Ada is primarily intended for the construction of large-scale and real-time systems. Although the tasking model of Ada was aimed mainly at embedded systems, its rich set of synchronization operators, together with its support for programming in the large, make Ada increasingly attractive for writing inherently parallel, computationally intensive, numeric and symbolic applications. Highly parallel shared-memory MIMD machines such as the NYU Ultracomputer have traditionally been regarded as suitable for large-scale scientific code, and not for more symbolic or heterogeneous concurrent applications such as are found in artificial intelligence or real-time programming. However, these applications would benefit greatly from (and even require) the computational power provided by highly parallel machines. It is therefore desirable to develop Ada implementations for highly parallel machines. The concern has been that the cost of managing large numbers of Ada tasks will negate the speedup obtained from their parallel execution. Indeed, a run-time supervisor for Ada must contend with many potentially expensive serialization points, that is to say, constructs that may take time proportional to the number of tasks involved. In this thesis we show that a run-time supervisor for an implementation of Ada on highly parallel machines can be written which is free of costly serialization points. The run-time supervisor SMARTS (Shared-memory Multiprocessor Ada Run Time Supervisor) depends on the hardware synchronization primitive fetch&$\Phi$, and supports the tasking features of Ada in a highly parallel manner. We further reduce the overhead of Ada tasking by means of micro-tasking, i.e., the explicit scheduling of a family of Ada tasks on a specified number of processors. Thus, Ada tasks are implemented as lightweight processes managed by SMARTS, rather than full-blown operating system processes.
Finally, SMARTS implements Ada shared variables efficiently by means of relay sets. Relay sets not only provide a means for identifying and resolving references to shared variables, but also facilitate the implementation of the Ada rendezvous mechanism as a remote procedure call.
-
Ph.D. Thesis
1989
A computational treatment of the comparative
Friedman, Carol
Abstract
|
PDF
Title: A computational treatment of the comparative
Candidate: Friedman, Carol
Advisor(s): Grishman, Ralph
Abstract:
This thesis develops a computational treatment of the comparative in English that is general, efficient, and relatively easy to implement, while not unduly complicating the natural language processing system. Implementation was accomplished using the Proteus Question Answering System, which translates natural language questions into database queries. The comparative is a particularly difficult language structure to process, and at present only a few natural language systems handle it in limited ways. However, the comparative is an essential component of language that frequently occurs in discourse. It is difficult to process because it corresponds to a remarkably diverse range of syntactic forms, such as coordinate and subordinate conjunctions and relative clauses, which are themselves complex and often contain missing elements. Semantically, the comparative is cross-categorical: adjectives, quantifiers, and adverbs can all carry the comparative feature. The semantics of the comparative has to be consistent with that of different linguistic categories while retaining its own unique characteristics. The computational approach of this thesis is based on a language model which contains functionally independent syntactic, semantic, and pragmatic components. Although the comparative relates to all the components, the syntactic component is the one mainly affected. The syntactic stage of processing analyzes and regularizes the comparative structures. The analysis process utilizes existing mechanisms that handle structures similar to the comparative. The regularization process transforms all the different comparative structures into one standard form consisting of a comparative operator and two complete clauses. This process consists of two phases: the first uses a compositional approach based on Montague-style translation rules.
The subsequent phase uses specialized procedures to complete the regularization process by expanding the comparative, filling in missing elements, and providing the appropriate quantified terms associated with the compared elements. After the comparative is regularized, the remaining stages of processing are hardly affected: each clause of the comparative is processed using the same procedures as usual, and only minor modifications are required specifically for the comparative.
-
Ph.D. Thesis
1989
Verification of three-dimensional model parameters from two-dimensional image data
Goldberg, Robert Raphael
Abstract
|
PDF
Title: Verification of three-dimensional model parameters from two-dimensional image data
Candidate: Goldberg, Robert Raphael
Advisor(s): Lowe, David
Abstract:
A unified approach is presented for instantiating model and camera parameters in the verification process of visual recognition. Recognition implies the generation of a hypothesis, a map between projected model data and image data; an important remaining part of the problem is the instantiation of model and camera parameters to verify the hypothesis. We present this camera pose determination as a non-linear least squares problem, with functions minimizing the distance between the projected model and the image data. This approach treats camera and model parameters alike, simplifying the camera/sensor calibration problem. Coordinate trees with null components, an original data structure, model the objects in the image. This allows the calculation of analytical partial derivatives (with respect to the parameters of model and camera). We discuss objective model functions that best suit general applications. The incorporation of various numeric techniques is analyzed, with tables displaying convergence results for various models and parameters. Good convergence results are obtained, and the method can be integrated into general vision applications. No depth information is required, and the algorithms also hold up in noisy images, adding much robustness to our techniques. A natural extension of these techniques is to instantiate the parameters of generally constrained models.
-
Ph.D. Thesis
1989
Topics in algebraic computing: Subresultants, GCD, factoring and primary ideal decomposition
Ho, Chung-Jen
Abstract
|
PDF
Title: Topics in algebraic computing: Subresultants, GCD, factoring and primary ideal decomposition
Candidate: Ho, Chung-Jen
Advisor(s): Yap, Chee
Abstract:
Our goal is to present an algorithm for computing a primary decomposition of a zero-dimensional ideal. We compute the decomposition of the radical ideal of the zero-dimensional ideal and lift it to a primary decomposition. The algorithm for decomposing radicals simply uses Kronecker's method of elimination together with GCD and factoring algorithms. Kronecker's method of elimination and GCD computations are related to resultant systems and subresultants; thus, we first investigate the theory of subresultants. We expound the theory of subresultants along the lines suggested by Loos. However, there were some major oversights in Loos's proof of the Subresultant Theorem. We point out exactly where Loos's proof fails and give a corrected version of the proofs. Then, we define the Sylvester matrix of many polynomials and explore its properties. Using these properties, we derive fast parallel algorithms for computing the GCD of many polynomials. Our algorithms have a better processor bound than von zur Gathen's algorithm; moreover, one of them uses no divisions. The factoring algorithm deals with factoring polynomials over multiple algebraic extensions of the rational number field. We present an algorithm to find an integer $D$ such that the defect of an integral basis for a multiple extension of Q divides $D$. Though there is a naive algorithm to find such a $D$ by translating a multiple extension to a simple extension, our algorithm has much better time and space bounds than the naive algorithm. With this result, we can factor polynomials directly without translating a multiple extension to a simple extension. Finally, we improve Kronecker's method of elimination; then, by applying the GCD and factoring algorithms to the resultant systems generated by Kronecker's method of elimination, we obtain a tree representation of all the associated prime ideals belonging to the zero-dimensional ideal.
-
Ph.D. Thesis
1989
Object recognition by geometric hashing
Lamdan, Yehezkel
Abstract
|
PDF
Title: Object recognition by geometric hashing
Candidate: Lamdan, Yehezkel
Advisor(s): Schwartz, Jacob T.; Wolfson, Haim J.
Abstract:
This thesis proposes a general and efficient model-based object recognition scheme. The scheme addresses the problem of identifying instances of model objects in single images. The model objects are two or three dimensional, and their instances in the scene might be overlapping and partially occluded by other unknown objects. The camera viewpoint is unknown and assumed to be arbitrary. The images can be two-dimensional intensity images or three-dimensional range images. The scheme deals uniformly with all feasible imaging transformations, from the simplest case of pure translation to the most complex case of the perspective transformation. The proposed method is based on geometric hashing. It hypothesizes model-to-scene transformations based on corresponding model and scene feature subsets. These subsets have the minimal cardinality that still allows recovery of the imaging transformation for a given transformation type. In order to prune the search space of all model and scene feature subset pairs, a hashing scheme is used. It is based on geometric relations among the object features which are invariant under the given transformation type. The recognition algorithm has two major steps. First, a hash table encoding the geometric invariants of the model features is prepared. This stage is independent of the scenes to be processed later and can be executed off-line. In the second stage, an efficient matching algorithm is performed which utilizes the previously prepared hash table. The efficiency of the recognition is achieved by considering only those model and scene subsets which are 'similar' under the given transformation type. The algorithm was tested in real-life situations for the important cases of recognizing flat and solid objects in the 3D world, using the weak perspective approximation to the perspective transformation.
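The two-stage structure described above (off-line table construction, then voting at recognition time) can be sketched for the simplest transformation type mentioned, pure 2D translation, where a single basis point suffices and point offsets are the invariants. The function names and the minimal voting scheme below are illustrative assumptions, not the thesis's implementation:

```python
from collections import defaultdict

def build_table(models):
    # Off-line stage: for each model and each choice of basis point,
    # hash the translation-invariant offsets of all model points.
    table = defaultdict(list)
    for name, pts in models.items():
        for b, (bx, by) in enumerate(pts):
            for (px, py) in pts:
                table[(px - bx, py - by)].append((name, b))
    return table

def recognize(table, scene):
    # On-line stage: try each scene point as the basis; every matching
    # offset votes for a (model, basis) hypothesis.
    votes = defaultdict(int)
    for (bx, by) in scene:
        for (px, py) in scene:
            for hyp in table.get((px - bx, py - by), ()):
                votes[hyp] += 1
    return max(votes, key=votes.get) if votes else None
```

For richer transformation classes the same skeleton applies with larger basis subsets (e.g. point pairs for similarity transforms) and correspondingly richer invariant coordinates.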
-
Ph.D. Thesis
1989
Mapping algorithms on regular parallel architectures
Lee, PeiZong
Abstract
|
PDF
Title: Mapping algorithms on regular parallel architectures
Candidate: Lee, PeiZong
Advisor(s): Kedem, Zvi
Abstract:
Significantly, many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary-dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the newly introduced ZERO-ONE-INFINITE property. Using this classification, the first complete set of necessary and sufficient conditions is derived for the correct transformation of a nested loop algorithm onto a given systolic array of arbitrary dimension by means of linear mappings. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays that efficiently execute classes of algorithms. In addition, a computer-aided design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.
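The causality requirement underlying such linear mappings can be illustrated with a minimal check: a linear schedule $\lambda$ mapping iteration point $I$ to time $\lambda \cdot I$ must satisfy $\lambda \cdot d \ge 1$ for every integer data-dependence vector $d$. This is only the standard causality test, sketched here as an assumption; it does not reproduce the thesis's full set of necessary and sufficient conditions for a given array:

```python
def is_valid_schedule(schedule, deps):
    # A linear schedule is causal iff every dependence vector is
    # delayed by at least one time step: dot(schedule, d) >= 1.
    return all(
        sum(l * di for l, di in zip(schedule, d)) >= 1
        for d in deps
    )
```

For example, matrix multiplication's canonical unit dependence vectors admit the schedule (1, 1, 1) but not one with a negative component along a dependence direction.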
-
Ph.D. Thesis
1989
Transformations for backtracking SETL programs
Nathan, Albert
Abstract
|
PDF
Title: Transformations for backtracking SETL programs
Candidate: Nathan, Albert
Advisor(s): Dewar, Robert
Abstract:
We study program transformations for a class of combinatorial search problems whose solutions are usually found by backtrack searching. High-level algorithms for such problems can be elegantly specified using SETL's backtracking primitives ok and fail, for which we give a more formal and precise semantic definition than the one which currently exists. Then we explore two types of transformations applicable to such specifications. First, we derive Finite Differencing transformations which reduce the amount of computation performed at each node of the search tree. Though the formal derivation of these transformations is somewhat lengthy, the net results are simple and easily understood. In the process of deriving the transformations, we also expose some difficulties encountered when applying Finite Differencing methods to programs which use ok/fail. Second, we propose two general transformations which reduce the size of the search tree generated by pruning subtrees which are guaranteed to fail. The first one is based on the idea of using knowledge accumulated during the search to guide the search, while the second one prunes subtrees which contain no paths of sufficient length needed to extend the current partial solution to a complete solution. For each filter, we describe its enabling conditions, give a high-level specification, and then formally derive an efficient implementation using Finite Differencing. Finally, we suggest suitable representations, based on SETL's Data Representation Sublanguage, for implementing the data structures used in our transformations. We demonstrate the effectiveness of all these transformations by programming some familiar backtrack-search problems and comparing the running times and number of nodes generated in the transformed versions against those of the original specification. 
We also discuss papers from the literature in which suggestions of these transformations appear, but which, in contrast to this work, give no formal demonstration of their correctness or applicability to other problem domains.
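The ok/fail style of backtrack search with a pruning filter can be sketched generically, here in Python rather than SETL, with a classic n-queens filter as the example; the helper names and this particular pruning predicate are illustrative, not drawn from the thesis:

```python
def backtrack(partial, n, prune):
    # Generic backtrack search: extend a partial solution one choice
    # at a time; "fail" (cut the subtree) when prune says no
    # completion can exist, in the spirit of SETL's ok/fail.
    if len(partial) == n:
        return [partial]
    solutions = []
    for choice in range(n):
        cand = partial + [choice]
        if prune(cand):          # subtree guaranteed to fail: skip it
            continue
        solutions.extend(backtrack(cand, n, prune))
    return solutions

def queens_prune(cand):
    # Newest queen is at row i = len(cand) - 1, column cand[-1];
    # fail on a shared column or diagonal with any earlier queen.
    i, c = len(cand) - 1, cand[-1]
    return any(c == r or abs(c - r) == i - j
               for j, r in enumerate(cand[:-1]))
```

Strengthening `prune`, as the two filters described above do, shrinks the search tree without changing the set of complete solutions found.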
-
Ph.D. Thesis
1989
Optimization and garbage collection in Ada programs on shared memory computers
Operowsky, Howard Lawrence
Abstract
|
PDF
Title: Optimization and garbage collection in Ada programs on shared memory computers
Candidate: Operowsky, Howard Lawrence
Advisor(s): Schonberg, Edmond
Abstract:
Compiler development for Ada is still in its infancy. Despite its goal of supporting embedded systems in an efficient manner, Ada programs still tend to be large and slow. In this thesis, we investigate three issues related to the efficient implementation of Ada programs: run-time representation of types and objects, reduction of run-time constraint checking, and parallel garbage collection on a shared memory multiprocessor. We present a collection of type templates for scalar and composite types which are storage-efficient and allow for efficient object code to be produced by the code generator. We present an algorithm for constructing these templates at run-time when constraint information is unavailable at compile-time. We show that a global optimizer is not required to reduce the overhead of constraint checking in Ada programs. We present a series of data-flow equations for available expressions and use them as the basis for a simple algorithm to eliminate redundant constraint checks. The algorithm is syntax-directed and is executed in a single pass over the source program's abstract syntax tree. No control flow analysis is required. Our algorithm also includes constant propagation using an extended framework and induction variable analysis. Because the algorithm operates on the abstract syntax tree, induction variable analysis is simplified. Although programs with goto statements are not considered, the exit statement is handled fully. We also examine the effects of shared variables and exception handling. No commercial compiler for Ada currently performs garbage collection. We examine the difficulties in garbage collection presented by Ada and present practical algorithms for Ada on shared memory multiprocessors. We extend Kung and Song's on-the-fly garbage collection algorithm to support multiple tasks on the NYU Ultracomputer/IBM RP3 computers. We prove that no additional synchronization is required because of Ada's rules on the use of shared variables.
-
Ph.D. Thesis
1989
Using relational discrete event systems and models for prediction of future behavior of databases
Tuzhilin, Alexander Sergei
Abstract
|
PDF
Title: Using relational discrete event systems and models for prediction of future behavior of databases
Candidate: Tuzhilin, Alexander Sergei
Advisor(s): Kedem, Zvi
Abstract:
The following prediction problem is studied in this dissertation: given a specification of the future behavior of a system and the current state of the system described by a relational database, predict what will happen to the system in the future. The behavior is defined in terms of Relational Discrete Event Systems (RDESes) and Models (RDEMs). An RDES is a set of possible non-deterministic trajectories of future states of a system. An RDEM is a finite formal description of a generally infinite RDES set. Various production system RDEMs and a recurrence equation RDEM are defined and formally compared in terms of expressive power. It is shown that one of the production system RDEMs is superior to the other RDEMs considered, not only in expressive power but in other respects as well. The suitability of various control strategies for restricting non-determinism and improving the system's performance is also considered. In order to obtain predictions about possible future states of a database, the Predictive Query Language (PQL) is defined, with syntax based on a predicate temporal logic and semantics based on RDEM models. It is shown how PQL is related to relational queries for Datalog and its extensions. Finally, a prototype of the Cassandra system is described. Cassandra supports PQL with semantics based on a production system RDEM. An example of a small Flexible Manufacturing System is used throughout the dissertation to illustrate various points about the described methods.
-
Ph.D. Thesis
1989
Fuzzy disk modeling and rendering of textured complex three-dimensional surfaces of real objects
Yang, Xue Dong
Abstract
|
PDF
Title: Fuzzy disk modeling and rendering of textured complex three-dimensional surfaces of real objects
Candidate: Yang, Xue Dong
Advisor(s): Perlin, Ken; Schwartz, Jacob T.
Abstract:
Three-dimensional geometric modeling in computer graphics is concerned with the representation, specification, and manipulation of free-form curves, surfaces, and volumes. This research explores a model for constructing representations of complex three-dimensional surfaces of real-world objects, such as sculptures in a museum, from sample points acquired with a special 3-D camera, and for synthesizing computer-generated pictures from this model. The difficulty of this problem comes from the complexity of the surface characteristics of such objects, which involve complicated irregular shapes and rich textures. This thesis presents a new three-dimensional surface model, the three-dimensional fuzzy disk model, for computer graphics display. This model allows any curved surface to be approximated by a number of overlapping disks. A new blending method has been developed to generate smoothly curved surfaces from the overlapping disks. The shape of a blending surface can be controlled by varying some geometric parameters. This three-dimensional fuzzy disk representation is organized into a multi-resolution structure which allows adaptive refinement of surface detail and supports a coarse-to-fine display process. A scan-line rendering algorithm has been developed to synthesize images from the new model. We also present a simpler, less accurate, but more efficient approximation to the original model. In addition, we present a fast shadow penumbra approximation algorithm capable of generating soft shadows.
-
Ph.D. Thesis
1989
The editing distance between trees: Algorithms and applications
Zhang, KaiZhong
Abstract
|
PDF
Title: The editing distance between trees: Algorithms and applications
Candidate: Zhang, KaiZhong
Advisor(s): Shasha, Dennis
Abstract:
Trees are a ubiquitous building block in computer science and related fields. Examples are grammar parses, image descriptions, secondary structures of RNA molecules, and many other phenomena. Comparing trees is therefore useful to compare scenes, parses, and so on. This thesis presents algorithms for tree comparison and applications of those algorithms. We consider the distance between two labeled trees to be the weighted number of editing operations (insert, delete, and modify) needed to transform one tree into another. We show that for unordered trees this is an NP-complete problem. For ordered trees we present a simple, fast dynamic programming algorithm that is significantly better than the best previously published algorithms. We then show that our method provides a general technique for solving other related tree problems (e.g. approximate tree matching). We also present efficient parallel algorithms under the assumption of unit costs. One of our applications is to compare secondary structures of RNA molecules. We describe another application to vision that uses tree comparisons to compare shapes. We have also implemented some of the algorithms in the form of a tree comparison toolkit. The preliminary version of the toolkit has been used at the U.S. National Cancer Institute for the comparison of RNA secondary structures.
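The edit model can be made concrete with a memoized recursion over forests (a sketch of the edit-distance definition only, under unit costs; it runs in exponential time in the worst case, unlike the thesis's efficient dynamic program, and the tuple encoding is an assumption): at each step the rightmost roots of the two forests are either deleted, inserted, or matched against each other.

```python
from functools import lru_cache

# Trees are (label, (child, child, ...)); forests are tuples of trees.
# Unit costs: insert = delete = 1, relabel = 0 if labels equal else 1.

def forest_size(forest):
    return sum(1 + forest_size(kids) for _, kids in forest)

@lru_cache(maxsize=None)
def fdist(F, G):
    if not F and not G:
        return 0
    if not F:
        return forest_size(G)          # insert every node of G
    if not G:
        return forest_size(F)          # delete every node of F
    (vlab, vkids), (wlab, wkids) = F[-1], G[-1]
    return min(
        fdist(F[:-1] + vkids, G) + 1,  # delete rightmost root of F
        fdist(F, G[:-1] + wkids) + 1,  # insert rightmost root of G
        fdist(F[:-1], G[:-1]) + fdist(vkids, wkids)
            + (0 if vlab == wlab else 1))  # match the two roots

def tree_dist(t, u):
    return fdist((t,), (u,))
```

For example, two trees differing only in one leaf label are at distance 1 (a single relabel).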
-
Ph.D. Thesis
1988
Parallel algorithms for band SPD systems of linear equations
Bar-On, Ilan
Abstract
|
PDF
Title: Parallel algorithms for band SPD systems of linear equations
Candidate: Bar-On, Ilan
Advisor(s): Widlund, Olof
Abstract:
In this thesis we consider parallel algorithms for solving band symmetric positive definite systems of linear equations where the number of equations is much larger than the bandwidth. Such systems arise in many practical applications for the dynamic analysis of structures such as the design of dams, bridges, ships, supersonic jets, etc. Sequential methods for solving these systems require intolerable turnaround times, hence the importance of fast parallel algorithms for solving them. Our main contribution in this thesis is the presentation of a new practical parallel algorithm. Our algorithm runs in O(m log n) time using nm/log n processors, where n is the number of equations and m the bandwidth. Hence, the algorithm is efficient. For tridiagonal systems the algorithm runs in O(log n) time using n/log n processors. We also develop a theoretically faster algorithm that runs in O(log m log n) time using nm^2/(log m log n) processors. This algorithm is efficient and runs as fast as the best currently known theoretical method. In chapter one we introduce the basic principles of parallel computation. In chapter two we review the basic algebraic and numerical properties of matrix computations. Here, we present a new efficient parallel algorithm for adding n k-bit integers in O(log n + log k) time based on the Fibonacci sequence. In chapter three we consider parallel methods for solving band triangular systems which arise from the L-U decomposition of A. We conclude that this method is not as efficient for parallel computers as for sequential ones. In chapter four, we give a new efficient parallel algorithm for inverting an s.p.d. matrix in O(log^2 n) time. We then present our new parallel algorithm for solving band s.p.d. systems, analyze its complexity, and show its improvement over the odd-even reduction algorithm. We conclude by pointing to yet unresolved problems in this field.
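The logarithmic parallel time bounds above all rest on tree-structured reduction: combining values pairwise halves the problem at each level, so n independent combinations finish in O(log n) parallel steps. A minimal sequential sketch of that principle (illustrative only; it is not Bar-On's algorithm, and with one processor it still takes O(n) work):

```python
def tree_sum(xs):
    """Sum n numbers by pairwise (tree) reduction. The additions on
    each level are mutually independent, so with enough processors
    the whole sum takes O(log n) parallel steps."""
    xs = list(xs)
    if not xs:
        return 0
    while len(xs) > 1:
        # one parallel level: combine disjoint pairs
        nxt = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:               # odd element passes through
            nxt.append(xs[-1])
        xs = nxt
    return xs[0]
```

Odd-even reduction applies the same halving idea to the equations of a tridiagonal system rather than to a simple sum.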
-
Ph.D. Thesis
1988
ZLISP--a portable parallel LISP environment
Dimitrovsky, Isaac Aaron
Abstract
|
PDF
Title: ZLISP--a portable parallel LISP environment
Candidate: Dimitrovsky, Isaac Aaron
Advisor(s): Harrison, Malcolm C.
Abstract:
This thesis concerns ZLISP, a portable parallel LISP environment for shared memory MIMD supercomputers. ZLISP was created as a vehicle for experimenting with parallel symbolic computing on a variety of supercomputer designs. It is a small but reasonably powerful subset of COMMON LISP that includes arrays, strings, structures, most of COMMON LISP's control flow functions, and a native code compiler, among other features. A low-level, flexible set of parallel primitives is provided that can support a wide spectrum of parallel programming styles. ZLISP currently runs on the NYU Ultracomputer prototype. A version that simulates parallelism runs on VAX and SUN minicomputers. I begin this thesis by discussing ZLISP's design and implementation. I attempt to justify the more difficult design decisions made during the development of ZLISP. I also give some details on how the more unusual parts of ZLISP are implemented. The full ZLISP reference manual is included as an appendix. I then turn to some parallel algorithms of independent interest that were discovered during the development of ZLISP. Many of these algorithms use the faa (fetch-and-add) operation, a versatile low-level synchronization primitive that has been promoted by the NYU Ultracomputer group and incorporated in several other supercomputer designs. I first describe some of the parallel algorithms used to implement ZLISP. These include an algorithm for parallel garbage collection and an algorithm for efficiently using hash tables in a parallel garbage collected environment. Finally, I cover some parallel algorithms provided for use by ZLISP programmers. I define the group lock, a new synchronization primitive useful in writing asynchronous parallel algorithms, and give some examples of its use in such applications as parallel stacks, heaps, and databases. I also present an assortment of space efficient parallel data structures such as queues, multiqueues, and stacks.
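The group lock generalizes readers/writers coordination: any number of tasks from the same group may hold the lock together, while tasks from other groups wait. A minimal sketch using a condition variable (an assumption-laden illustration; the ZLISP primitive is built on fetch-and-add, not on a mutex-based condition variable, and the class and method names here are hypothetical):

```python
import threading

class GroupLock:
    """Threads in the same group may hold the lock concurrently;
    threads from any other group must wait (a generalization of a
    readers/writers lock)."""
    def __init__(self):
        self._cond = threading.Condition()
        self._group = None           # group currently inside, if any
        self._count = 0              # holders from that group
    def acquire(self, group):
        with self._cond:
            while self._count and self._group != group:
                self._cond.wait()    # another group is inside
            self._group = group
            self._count += 1
    def release(self):
        with self._cond:
            self._count -= 1
            if self._count == 0:     # last one out admits a new group
                self._group = None
                self._cond.notify_all()
```

With groups "readers" and "writers" this behaves like a readers/writers lock; with more groups it supports the asynchronous parallel algorithms (stacks, heaps, databases) mentioned above.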
-
Ph.D. Thesis
1988
Reasoning about shape and kinematic function in mechanical devices
Joskowicz, Leo
Abstract
|
PDF
Title: Reasoning about shape and kinematic function in mechanical devices
Candidate: Joskowicz, Leo
Advisor(s): Davis, Ernest
Abstract:
This thesis presents a general framework for reasoning about the relationship between the shape of a solid object and its kinematic function in a mechanical device. Such a framework is essential for numerous reasoning tasks concerning mechanical devices such as analysis, prediction of behavior, and design. We propose to use an intermediate representation that relates the geometry of objects to their kinematic function in a mechanism; this representation stems from the notion of configuration spaces, originally introduced for motion planning. We show that configuration spaces are an appropriate symbolic representation for reasoning about the kinematics of mechanical devices because the regions of the mechanism's configuration space can be interpreted as representing all the qualitatively different possible motions of its objects. Our theory supports both qualitative and causal reasoning. To describe kinematic behavior functionally, we begin by developing two functional languages: possible motions descriptions and causal descriptions. We then present a two-step analysis procedure that starts by deducing the behavior of all kinematic pairs and then composes these behaviors to obtain the overall behavior of the mechanism. For a subclass of mechanisms (fixed axes mechanisms), we show that a simplified version of the composition operation can be used to obtain the overall behavior, and we outline a constraint propagation, label inferencing algorithm to produce a region diagram. This diagram constitutes a total qualitative envisionment of the mechanism's reachable behaviors. Given a sequence of input motions and a region diagram, we indicate how to predict the behavior of the mechanism. In the second part of this thesis, we address the problem of designing the shape of physical objects defined by a set of functional requirements. In particular, we show how to design kinematic pairs from a description of their desired behavior. 
We provide a general heuristic algorithm for innovative shape design, and present a number of efficient algorithms for special design cases. We also show how to design kinematic pairs when a qualitative or incomplete description of the desired behavior is provided.
-
Ph.D. Thesis
1988
Use of three-dimensional curves in computer vision
Kishon, Eyal
Abstract
|
PDF
Title: Use of three-dimensional curves in computer vision
Candidate: Kishon, Eyal
Advisor(s): Schwartz, Jacob T.
Abstract:
The objective of this work is to study the use of 3-D curves in model based object recognition. We approach the two main problems of object recognition, i.e., model formation and matching, in a unified way. We propose a framework in which 3-D curves are used to represent objects in a database of models, and we present algorithms that use these curves to perform efficient matching between an observed object and a previously prepared database of object models. The motivation for this work comes from the fact that 3-D curves can describe in a natural way the objects from which they were extracted. Moreover, the use of these curves in the matching process has proved to be highly accurate while at the same time very efficient. In this work we present algorithms to extract 3-D curves from a pair of range and intensity images, and then algorithms that classify and separate the different types of curves. We also present two efficient algorithms for matching 3-D curves.
-
Ph.D. Thesis
1988
Simulation-based understanding of texts about equipment
Ksiezyk, Tomasz Bartlomiej
Abstract
|
PDF
Title: Simulation-based understanding of texts about equipment
Candidate: Ksiezyk, Tomasz Bartlomiej
Advisor(s): Grishman, Ralph
Abstract:
This thesis presents a natural language understanding system, operating in the domain of equipment consisting of mechanical, hydraulic, and electrical elements. The task of the system is to analyze reports regarding the failure, diagnosis and repair of equipment. We argue that a general knowledge of equipment is not sufficient for a full understanding of such reports. As an alternative, we propose a system which relies on a detailed simulation model to support language understanding. We describe the structure of the model and emphasize features specifically required for language understanding. We show how this model can be used in analyzing and determining the referents for complex noun phrases describing equipment parts. We outline the data structures used for concepts which are mentioned in the text but which have no permanent representation in the model, and explain how they are created during the text analysis. Similarly, we discuss the data structures for representing the facts conveyed by the text, and provide algorithms for translating text expressing facts into their representations. We point out the importance of identifying the implicit temporal and causal relations in the text and show how the simulation capabilities of the model support this task. We present a dynamic graphical interface which gives the user insight into the way the input has been understood by the system. Finally, we indicate how our system may be extended to facilitate dynamic (i.e. during the analysis of text) extensions to its data base, and to assist the user in entering new equipment models. Most aspects of the discussed system were implemented on a Symbolics Lisp machine.
-
Ph.D. Thesis
1988
Extensions to SETL to support problem specification and transformation of imperative programs
Lewis, Henry Merriman
Abstract
|
PDF
Title: Extensions to SETL to support problem specification and transformation of imperative programs
Candidate: Lewis, Henry Merriman
Advisor(s): Dewar, Robert
Abstract:
Programming by transformation is a reliable and efficient way to develop algorithms. An ideal methodology begins with high-level specifications of the problem to be solved. Such dictions are by nature concise, easy to understand, and easy to verify. They are free from the details that determine the method by which the solution is found, yet promote transformations leading to derivation of solutions. The user of the transformation system applies refinements and modifications that transform the problem specifications into algorithm specifications, and so is able to derive programs that solve the original problem. We propose extensions to the set-theoretic programming language SETL to support problem specifications. The resulting language realizes the ideals of problem specification, and further supports direct execution of the highest-level specifications as a search over a solution space. Its dictions are imperative at all levels of derivation, so as to provide consistency of style among all versions, from problems to programs. We show how dictions of the form find variables | conditions serve to specify problems, and how transformation of the conditions promotes derivation of algorithms. We propose dictions that allow concise specification of problems that require minimization of a function, and a variant that allows specification of problems that are inherently non-rigorous, or whose solutions admit approximation or tolerance. We suggest transformations of expressions that lead to algorithms employing formal differentiation of expressions or dynamic programming. Through examples we show that the method of transformational programming constitutes a tool for the specification, derivation, and discovery of algorithms.
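Direct execution of a find variables | conditions specification as a search over a solution space can be sketched in a few lines (a toy analogue, not SETL; the function names and the finite-domain restriction are assumptions of this sketch):

```python
from itertools import product

def find(domains, condition):
    """Run a 'find variables | conditions' specification directly as
    a brute-force search over the product of the variables' domains."""
    for values in product(*domains):
        if condition(*values):
            return values
    return None                      # no satisfying assignment

def find_min(domains, condition, objective):
    """Minimization variant: the feasible point minimizing objective."""
    feasible = (v for v in product(*domains) if condition(*v))
    return min(feasible, key=lambda v: objective(*v), default=None)
```

The point of the transformational methodology is precisely that such a directly executable (but slow) specification can then be refined, step by step, into an efficient algorithm.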
-
Ph.D. Thesis
1988
Foundations of a logic of knowledge, action, and communication
Morgenstern, Leora
Abstract
|
PDF
Title: Foundations of a logic of knowledge, action, and communication
Candidate: Morgenstern, Leora
Advisor(s): Davis, Ernest
Abstract:
Most Artificial Intelligence planners work on the assumption that they have complete knowledge of their problem domain and situation, so that planning an action consists of searching for an action sequence that achieves some desired goal. In actual planning situations, agents rarely know enough to map out a detailed plan of action when they start out. Instead, they initially draw up a sketchy plan and fill in details as they proceed. This thesis presents a formalism that is expressive enough to describe this flexible planning process. We address ourselves to two central issues: (1) How can an agent determine that he knows enough to do an action? (Knowledge Preconditions Problem) (2) If the agent does not know enough, how can he plan to get the action done? (Ignorant Agent Problem) We demonstrate that modal logic is too weak to serve as the basis for such a theory, and choose instead to work within a first order logic augmented with quotation. We then discuss the Knower Paradoxes that arise from such syntactic treatments of knowledge, and propose a solution to these paradoxes based on Kripke's solution to the Liar Paradox. Next, we present a theory of action and planning that is powerful enough to describe partial plans and joint-effort plans. We then explain what knowledge an agent must have in order to successfully perform an action and how an ignorant agent can construct and execute complex plans in order to overcome his ignorance. A central observation underlying our solution to the Ignorant Agent Problem is that ignorant agents tend to use communicative acts, such as asking for information, and delegating, to plan around their ignorance. During the final part of this thesis, we therefore develop a theory of communication as an integrated part of our theory of action and planning. We show that this theory of communication is more expressive than standard Austinian-type speech act theories. 
The thesis includes comparisons of our theory with other syntactic and modal theories such as Konolige's and Moore's. We demonstrate that our theory is powerful enough to solve classes of problems that these theories cannot handle.
-
Ph.D. Thesis
1988
Taliere: An interactive system for data structuring SETL programs
Straub, Robert Michael
Abstract
|
PDF
Title: Taliere: An interactive system for data structuring SETL programs
Candidate: Straub, Robert Michael
Advisor(s): Schonberg, Edmond
Abstract:
This thesis describes a system designed to aid SETL programmers in the selection of data structures for the representation of program variables. The system uses information from the SETL optimizer, together with information provided interactively by the programmer, to select from the set and map representations which are available to implement SETL objects. We begin by describing previous work on data structure selection for very high level languages, including the data structure selection performed by the SETL optimizer. We then present a general description of a system for data structure selection for SETL programs. We describe techniques used to obtain useful information from a source program. This includes obtaining symbolic estimates of the execution frequencies of individual program operations, and estimates of the sizes of program objects. The data structures considered by the system are then described. We present a detailed description of the data structure selection algorithm, along with optimizations and heuristics used to improve the execution efficiency of the data structuring system. We conclude with examples comparing choices made by the system with choices made by a competent programmer and speculate on the eventual success of semi-automatic structuring systems.
-
Ph.D. Thesis
1988
Operating system data structures for shared memory MIMD machines with fetch-and-add
Wilson, James M.
Abstract
|
PDF
Title: Operating system data structures for shared memory MIMD machines with fetch-and-add
Candidate: Wilson, James M.
Advisor(s): Gottlieb, Allan
Abstract:
Ideally, procedures and data structures on a shared-memory MIMD machine should be serialization-free and concurrently accessible to avoid (potential) performance-limiting bottlenecks. The fetch-and-add coordination primitive, in conjunction with combining interconnection networks, has been proposed as a means for achieving this goal. The first is essentially an indivisible add-to-memory and the second combines simultaneous requests to the same memory location. In this thesis we address serialization-free memory and process management for a shared-memory MIMD machine with fetch-and-add and a combining network. To meet this goal we adopt a self-service paradigm for the operating system that permits each processing element (PE) to service its own requests (thereby avoiding central server bottlenecks). The success of this approach depends upon the use of concurrently accessible data structures to hold data shared among the PEs. We begin by reviewing existing fetch-and-add based queue and multiqueue (a compressed queue) implementations that support concurrent queue insertion and deletion. We then extend these implementations to include new operations (e.g., the removal of an interior queue item) and new data structure representations (e.g., linked lists). Parallel memory allocation algorithms, many based on the modified queue and multiqueue data structures, are then given. These algorithms include parallel analogs to a number of existing serial algorithms such as Knuth's boundary tag method and the binary buddy system. Next, we define a set of primitives that permit various task activities, such as creation and scheduling, to be done in parallel. Task-switching readers/writers and event primitives are given as well. In the readers/writers implementations, reader activity is fully parallel in the absence of writers. An important feature of both the readers/writers and event implementations is that tasks waiting for a resource can be resumed in parallel by multiple PEs. 
We then demonstrate how high-level parallel programming constructs (e.g., parallel loops) may be implemented via the task primitives and the queue and multiqueue data structures. Finally, we prove that one of the readers/writers implementations satisfies certain correctness criteria including freedom from deadlock and the mutual exclusion of readers and writers.
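The core trick behind fetch-and-add queues can be sketched briefly (a simplified illustration, not the thesis's algorithms: it omits full/empty detection and the per-cell synchronization a deleter racing an inserter requires, and Python has no hardware fetch-and-add, so the primitive is simulated with a lock; all names here are hypothetical):

```python
import threading

class FAA:
    """Simulated fetch-and-add cell (hardware faa is indivisible and,
    with a combining network, serialization-free)."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()
    def faa(self, delta):
        with self._lock:
            old = self._value
            self._value += delta
            return old               # returns the pre-increment value

class FAAQueue:
    """Bounded circular queue: faa on the insert/delete counters hands
    each caller a distinct slot, so callers contend only on a counter,
    never on the queue as a whole."""
    def __init__(self, capacity):
        self._slots = [None] * capacity
        self._cap = capacity
        self._tail = FAA()           # total insertions so far
        self._head = FAA()           # total deletions so far
    def insert(self, item):
        t = self._tail.faa(1)        # claim a unique slot
        self._slots[t % self._cap] = item
    def delete(self):
        h = self._head.faa(1)        # claim the next occupied slot
        return self._slots[h % self._cap]
```

Because every caller receives a distinct counter value, simultaneous inserts (or deletes) proceed without serializing on shared queue state, which is the serialization-free property the thesis pursues.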
-
Ph.D. Thesis
1987
A Decision Procedure for a Class of Unquantified Formulae of Set Theory Involving the Powerset and Singleton Operators
Cantone, Domenico A.
Abstract
|
PDF
Title: A Decision Procedure for a Class of Unquantified Formulae of Set Theory Involving the Powerset and Singleton Operators
Candidate: Cantone, Domenico A.
Advisor(s): Schwartz, Jacob T.
Abstract:
The class of unquantified formulae of set theory involving Boolean operators, the powerset and the singleton operators, and the equality and membership predicates is shown to have a solvable satisfiability problem. It is also shown that whenever a formula phi in the above class is satisfiable there exists a hereditarily finite model of phi, whose rank is bounded by a doubly exponential expression in the number of variables occurring in phi.
-
Ph.D. Thesis
1987
Tape Reversal and Parallel Time
Chen, Jianer
Abstract
|
PDF
Title: Tape Reversal and Parallel Time
Candidate: Chen, Jianer
Advisor(s): Yap, Chee; Gross, Jonathan
Abstract:
Recent research has shown an intimate relationship between reversal complexity on multitape Turing machines and parallel computation time. In this dissertation, we systematically study the structural properties of these two important complexity measures and the relationship between them. We develop some basic techniques necessary for establishing analogues of well-known theorems on space and time complexity. We give a linear simulation of deterministic space by deterministic reversal on multitape Turing machines and the first known tape reduction theorem for reversal complexity. As applications of the tape reduction theorem, we prove a hierarchy theorem and show the existence of complete languages for reversal complexity. The relationship between reversal and tape is also discussed. We show that with respect to reversal complexity there is an intrinsic difference between 1-tape and 2-tape Turing machines. More precisely, we show that in the deterministic case, 2-tape Turing machines can simulate k-tape Turing machines with only a polynomial (quadratic) increase of reversals while 1-tape Turing machines do not have such a property if P ≠ PSPACE; in the nondeterministic case, reversal complexity is too powerful to be a complexity measure on 2-tape Turing machines but on 1-tape Turing machines it is a reasonable complexity measure which is linearly related to the space complexity. For parallel computation, we introduce the concepts of deterministic, nondeterministic and oracle circuits in a very natural way. Based on our model of oracle circuits, we build up a log-depth hierarchy in parallel computation, and show that our hierarchy corresponds exactly to the well-known NC hierarchy. From this point of view, some structural properties of the NC hierarchy are discussed. Log-depth many-one reducibility and log-depth Turing reducibility are discussed. Several new complete languages for the class of deterministic log-space languages are presented. 
Finally, we give detailed proofs of the polynomial relationship between reversal complexity on multitape Turing machines and parallel time complexity on uniform circuits. (Some of these proofs have been outlined by Pippenger.)
-
Ph.D. Thesis
1987
The use of Data Flow Information for the Selection and Evaluation of Software Test Data
Frankl, Phyllis G.
Abstract
|
PDF
Title: The use of Data Flow Information for the Selection and Evaluation of Software Test Data
Candidate: Frankl, Phyllis G.
Advisor(s): Weyuker, Elaine
Abstract:
Two families of software test data adequacy criteria, each based on data flow analysis, are defined for programs written in Pascal. Their formal properties are investigated and interactive software testing tools based on them are described. The first of these families, the data flow testing criteria, was previously defined for programs written in a simple language. We extend the definitions to apply to programs written in Pascal. The data flow testing criteria are based purely on the syntax of the program being tested. They require that the test data execute certain paths from program points at which variables are defined to program points at which those definitions are used. We describe the design and implementation of a software testing tool, ASSET, based on the data flow testing criteria. A serious weakness of the data flow testing criteria is that for some programs there exists no set of test data which is adequate for testing the program according to these criteria. This problem arises due to unexecutable paths in the program. The second family of criteria, the feasible data flow testing criteria, circumvent this problem by eliminating from consideration those definition-use associations which can never be exercised. We show that certain formal properties of the feasible data flow testing criteria differ significantly from those of the data flow testing criteria. Since it is undecidable whether a given set of test data satisfies a given feasible data flow testing criterion, feasible data flow testing cannot be fully automated. However, it can be partially automated. We describe a heuristic method, the path expression method, which attempts to determine whether a given definition-use association can be exercised. The path expression method is based on a combination of data flow analysis and symbolic evaluation. We introduce a new symbolic evaluation technique which is more general, but essentially no more expensive, than symbolic execution. 
The path expression method, along with ASSET, constitute a tool which partially automates feasible data flow testing.
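The def-use associations at the heart of these criteria can be illustrated on a straight-line program (a deliberately simplified sketch: the thesis's criteria cover branching programs, where an association requires a definition-clear *path* and feasibility is undecidable; the encoding and function name here are hypothetical):

```python
def def_use_pairs(stmts):
    """stmts: straight-line program, one (defined_vars, used_vars)
    pair of lists per statement. Returns triples (i, j, v) meaning
    the definition of v at statement i reaches its use at statement j."""
    pairs = []
    last_def = {}                    # var -> index of its live definition
    for j, (defs, uses) in enumerate(stmts):
        for v in uses:
            if v in last_def:        # reached by the most recent def
                pairs.append((last_def[v], j, v))
        for v in defs:               # a new definition kills the old one
            last_def[v] = j
    return pairs
```

A data-flow-adequate test set must then execute (at least) each such association; the feasible criteria additionally discard associations no input can ever exercise.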
-
Ph.D. Thesis
1987
Control and Task Planning for a Four Finger Dextrous Manipulator
Hor, Maw-Kae
Abstract
|
PDF
Title: Control and Task Planning for a Four Finger Dextrous Manipulator
Candidate: Hor, Maw-Kae
Abstract:
Various attempts have been made to build a dextrous hand and to study the control and planning issues involved in dextrous manipulation. However, in many practical situations, the following problems make the real time control and planning of dextrous manipulation very difficult: (1) the discrepancy between the model and reality (for example, imprecise knowledge of inertia, friction, and the geometric dimensions), (2) the inadequacy of the control theory used in controlling a highly non-linear manipulator, (3) the numerous computations required in the dynamic and kinematic calculations, and (4) the lack of abstract level manipulation primitives. This thesis investigates several issues in relation to dextrous manipulation and control. We designed and built a planar manipulator, the Four Finger Manipulator, for the study of dextrous manipulation. We also developed a prototype software structure for multi-finger manipulators. Models for quasi-static control and real-time calculation are presented which make real-time control possible. Heuristics are described for: (a) choosing the finger gripping forces of a force controlled adaptive frictional grasp, (b) estimating the trajectory in compliant motions, and (c) coordinating finger groups to perform tasks that require multiple finger groups. A set of manipulation primitives and algorithms have been developed on the Four Finger Manipulator. Successful performance is demonstrated for various tasks.
-
Ph.D. Thesis
1987
An Analyzer for the Information Content of Sentences (Semantics)
Johnson, Stephen Bennett
Abstract
|
PDF
Title: An Analyzer for the Information Content of Sentences (Semantics)
Candidate: Johnson, Stephen Bennett
Advisor(s): Sager, Naomi
Abstract:
An algorithm is presented which produces a representation of the information content of sentences as a tree of operator words predicating on argument words. The Sentence Analyzer employs a new type of formal grammar which describes the surface syntax of sentences, grammatical constraints, and the operator-argument relations underlying the surface forms. The algorithm works left to right, first obtaining the operator-argument representations of words from a lexicon, and then applying grammar rules to construct operator-argument subtrees over longer and longer segments of the sentence. All alternate analyses are developed simultaneously. The grammar rules are based on the detailed mathematical grammar of Zellig Harris, termed here Composition-Reduction Grammar, in which sentences are generated by a process of operator words entering on argument words. As words enter, this tree structure is linearized. Various reductions may apply to words which are redundant in the operator-argument structure, producing variations such as morphological changes, and the dropping of words from the sentence. Reduction yields sentences with a more compact form, the form we see, while preserving the objective information content. The fundamental unit of the formal grammar developed here is the descriptor, a tuple of six attributes, which represents an operator-argument word class. A descriptor is similar to traditional word classes like nouns and verbs, but can carry information specific to an individual word to form an entry in the lexicon. More importantly, descriptors can replace the use of symbols for phrases in traditional grammar. This is because a descriptor can stand for the entire word sequence spanned by the operator-argument subtree of which it is the root. This feature enables the grammar rules to be specified as a relation between two descriptors whose subtrees span adjacent word sequences. 
The two words related by a rule either have a simple operator-argument relation, or a more complex operator-argument relation made compact by reduction. The result is a formal grammar in which all relations are between words, with sufficient power for the Sentence Analyzer to perform a direct analysis of sentences into their informational relations, without recourse to intricate transformational procedures.
-
Ph.D. Thesis
1987
Description of Shape using Orientation and Propagation Flow
Menczel, Yaron
Abstract
|
PDF
Title: Description of Shape using Orientation and Propagation Flow
Candidate: Menczel, Yaron
Abstract:
A new theory for the partition of an image into its syntactical primitives is introduced. The method uses edge segments and their orientation to mark an image with useful syntactical information. The marking is done by defining a flow initiating from the boundary and propagating inward into the shape. Three algorithms are introduced. The first sends flow waves in a direction perpendicular to the edges into the object. The second algorithm is an iterative version of the first algorithm, with the addition that an edge detector is constantly applied on the growing object. The third labels the edges with their orientation and then iteratively applies a majority vote selection to spread the orientation, with unlabeled pixels inactive in the voting process. The propagation is moderated by a number of heuristics that ensure local and global support within the flow. The flow carries orientation data and spreads the information to all interior pixels. A connected component algorithm based on orientation is then used to construct segments of uniform orientation. These segments constitute the basis of a structural description. The new approach is compared to other methods of segmentation and representation of shapes. These other methods are not always capable of explaining human perception of shapes in a uniform and unique way. Methods that are designed to deal with simple perceptual domains are not capable of dealing with occlusion, texture, touching bodies, and subjective contours. In contrast, this new proposal is shown to work with simple figures as well as more complex real-world images. Several examples are given to show the usefulness of the approach. In particular, we give an implementation of a system that performs automatic character recognition based on this method.
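The majority-vote propagation of the third algorithm can be sketched as follows. The grid encoding, neighborhood, and stopping rule are simplified assumptions; the thesis's heuristics for local and global support are omitted:

```python
# Sketch of majority-vote orientation propagation: labeled edge pixels
# spread their orientation to unlabeled interior pixels, which take the
# majority label of their labeled neighbors. Unlabeled pixels do not vote.
from collections import Counter

def propagate_orientation(grid, iterations=10):
    """grid: 2-D list; each cell is an orientation label or None."""
    h, w = len(grid), len(grid[0])
    for _ in range(iterations):
        changed = False
        new = [row[:] for row in grid]
        for y in range(h):
            for x in range(w):
                if grid[y][x] is not None:
                    continue                      # already labeled
                votes = Counter()
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] is not None:
                            votes[grid[ny][nx]] += 1
                if votes:
                    new[y][x] = votes.most_common(1)[0][0]
                    changed = True
        grid = new
        if not changed:
            break
    return grid

# A vertical edge labeled 'V' on the left flows rightward into the shape.
filled = propagate_orientation([["V", None, None],
                                ["V", None, None],
                                ["V", None, None]])
```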
-
Ph.D. Thesis
1987
Generic: a Programming Language for VLSI Layout and Layout Manipulation
Solworth, Jon A.
Abstract
|
PDF
Title: Generic: a Programming Language for VLSI Layout and Layout Manipulation
Candidate: Solworth, Jon A.
Abstract:
We describe a programming language, GENERIC (GENERation of Integrated Circuits) for producing high-quality, general-purpose layout of custom integrated circuits. Unlike other VLSI programming languages, in GENERIC, existing layouts can be manipulated by the VLSI operators to produce new layouts. The design of a layout in GENERIC starts with a circuit description which contains the active components and electrical nets. The circuit description (sometimes called an abstract layout) is then transformed into a realizable layout by the application of VLSI operators. These operators are both design-rule safe and wire connectivity maintaining. Built-in operations include relative placement, primitive compaction, and orientation. A novel mechanism called planes is described, which for the first time enables topological manipulations that do not violate design rules. GENERIC forms the kernel of a VLSI design system. We also describe the cell library, Flexcell, which contains parameterized and modifiable cells. Cells in the Flexcell library are created using cell generators, but unlike traditional cell generators, the layout generated need not exhibit a high degree of regularity. For each cell, a number of templates are provided, which encode known good layout schemes. Cells created with a template can then be modified using utilities written in GENERIC. Hence, Flexcell provides highly optimized cells which can be reused in many different environments.
-
Ph.D. Thesis
1987
A Theory of Concurrent Programs and Test Data Adequacy
Weiss, Stewart Neil
Abstract
|
PDF
Title: A Theory of Concurrent Programs and Test Data Adequacy
Candidate: Weiss, Stewart Neil
Abstract:
We establish a general framework for the investigation of concurrent program-based adequacy criteria and we extend notions of program-based test data adequacy to the domain of concurrent programs. This work is consistent with the testing theory proposed by Gourlay and the axiomatization of test data adequacy proposed by Weyuker. Our method is to define a representation of concurrent programs which is particularly suited to the study of the problems of concurrent program testing, and which serves as a model for an extension of a theory of testing to such programs. Our framework also provides the basis for a practical testing tool for concurrent programs. We prove theoretical results concerning various properties of our representation of concurrent programs, among which are notions of completeness, consistency, and computability. We propose approximate solutions to some of the undecidable problems which we encounter. We demonstrate that our theory of concurrent program testing may be used to assess the complexity and reliability of various adequacy criteria for testing concurrent programs. We use our model to investigate and compare concurrent program based adequacy criteria derived from a subclass of structural coverage criteria including a large family of data flow criteria. Finally, we propose practical methods of using our framework as an aid to concurrent program testing.
-
Ph.D. Thesis
1986
Three-Dimensional Data Acquisition by Means of the Intensity Ratio Depth Sensor (Vision, Robotics)
Carrihill, Brian Lee
Abstract
|
PDF
Title: Three-Dimensional Data Acquisition by Means of the Intensity Ratio Depth Sensor (Vision, Robotics)
Candidate: Carrihill, Brian Lee
Abstract:
The thesis discusses the acquisition of three-dimensional information by means of the Intensity Ratio Depth Sensor. The Intensity Ratio Depth Sensor uses a structured-light triangulation approach for the measurement of depth from a camera unit to object surfaces in a scene. The device may be viewed as a modification of the plane-of-light scheme in which multiple illumination planes are encoded by intensity ratio values obtained from two or three intensity images. The modification avoids the need to scan the plane of light which, together with the small amount of processing required for the depth calculation, offers a distinct speed advantage over existing schemes. The system design and calibration issues, necessary in obtaining a working Intensity Ratio Depth Sensor, are analyzed. The depth equation (for the transformation of intensity ratio values into depth values) together with four experimental methods for its calculation are presented. The results of the four sensor implementations are given for test scenes. Potential scene dependent and scene independent error sources are discussed. In particular, mutual illumination (illumination resulting from reflections between surface elements) is an important scene dependent error source. An analysis of mutual illumination based on a radiative energy transfer formulation is presented. The result of the analysis is an iterative mutual illumination removal algorithm which is applied to test scenes. Two empirical methods for mutual illumination removal are also derived and demonstrated. Preliminary processing of the three-dimensional data produced by the sensor, exploiting constraints imposed by the device, is examined. The processing yields first and second derivative surface parameters for points in the scene.
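The core intensity-ratio idea can be sketched as follows. Two structured-light exposures (one with a spatially ramped projector intensity, one constant) give a per-pixel ratio that identifies which illumination plane lit the pixel; a calibration function then maps ratio to depth. The linear calibration below is a hypothetical stand-in for the thesis's depth equation, not the actual sensor model:

```python
# Sketch: the ratio of ambient-corrected intensities cancels the surface
# reflectance term shared by both images, leaving a value that encodes
# the illumination plane; a (hypothetical) calibration maps it to depth.

def intensity_ratio(i_ramp, i_const, i_ambient=0.0):
    """Ratio of ambient-corrected intensities; ideally albedo-independent."""
    return (i_ramp - i_ambient) / (i_const - i_ambient)

def depth_from_ratio(ratio, near=10.0, far=50.0):
    # Hypothetical linear calibration: ratio 0 -> near plane, 1 -> far.
    return near + ratio * (far - near)

r = intensity_ratio(i_ramp=60.0, i_const=110.0, i_ambient=10.0)  # 0.5
d = depth_from_ratio(r)                                          # 30.0
```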
-
Ph.D. Thesis
1986
Polygon Optimization Problems (Computational Geometry, Algorithm)
Chang, Jyun-Sheng
Abstract
|
PDF
Title: Polygon Optimization Problems (Computational Geometry, Algorithm)
Candidate: Chang, Jyun-Sheng
Advisor(s): Yap, Chee
Abstract:
The thesis examines polygon optimization problems arising from the stockcutting problem. Two types of problems are considered: the inclusion problems and the enclosure problems. The inclusion (enclosure) problems ask for a maximum polygonal subset (minimum polygonal superset) of a given polygon, satisfying certain conditions. Both the area and perimeter metrics on the polygons can be used as the measure of optimality. Various geometric properties and algorithms for these problems are shown. The main results are: (1) An O(n^7) time (O(n^6) time) algorithm for finding a maximum area (perimeter) convex subset. (Only exponential time algorithms existed previously for the problem.) (2) An O(n^2 log n log k) time algorithm for finding a minimum area enclosing convex k-gon. (3) An O(n^2) time algorithm for finding a minimum perimeter enclosing triangle. (4) An O(nk^4) time algorithm for finding a minimum enclosing k-gon with a fixed shape.
-
Ph.D. Thesis
1986
Machine Code Optimization
Goss, Clinton Francis
Abstract
|
PDF
Title: Machine Code Optimization
Candidate: Goss, Clinton Francis
Abstract:
This dissertation explores classes of compiler optimization techniques which are applicable late in the compilation process, after all executable code for a program has been linked. We concentrate on techniques which, for various reasons, cannot be applied earlier in the compilation process. We begin by demonstrating the need for optimizations at this level in the UNIX programming environment. We then describe a Machine Code Optimizer which improves code in executable task files in that environment. The specific details of certain algorithms are then described: code elimination to remove unreachable code, code distribution to re-order sections of code, operand reduction which converts operands to use more advantageous addressing modes available on the target architecture, and macro compression which collapses common sequences of instructions. We show that the problem of finding optimal solutions for code distribution is NP-Complete and discuss heuristics for practical solutions. We then describe the implementation of a Machine Code Optimizer containing the code elimination, code distribution, and operand reduction algorithms. This optimizer operates in a production environment and incorporates a machine independent architecture representation which allows it to be ported across a large class of machines. We demonstrate the portability of the Machine Code Optimizer to the Motorola MC68000 and the Digital VAX-11 instruction sets. Finally, metrics on the improvements obtained across architectures and the optimization techniques are provided along with proposed lines of further research. The methods demonstrate that substantial reductions in code space and more modest improvements in execution speed can be obtained using these techniques.
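The code-elimination pass mentioned above can be sketched at a high level: mark everything reachable from the entry point by following fall-through and branch targets, then drop the rest. The instruction encoding below is a hypothetical simplification, not the optimizer's actual machine representation:

```python
# Sketch of unreachable-code elimination on a linear instruction list.
# Each instruction is (op, target); target is an index for 'jmp'/'br'.
# 'jmp' never falls through, 'br' (conditional) does, 'ret' ends a path.

def eliminate_unreachable(code, entry=0):
    reachable, work = set(), [entry]
    while work:
        i = work.pop()
        if i in reachable or i >= len(code):
            continue
        reachable.add(i)
        op, target = code[i]
        if op in ("jmp", "br"):
            work.append(target)          # follow the branch edge
        if op not in ("jmp", "ret"):
            work.append(i + 1)           # follow the fall-through edge
    return [ins for i, ins in enumerate(code) if i in reachable]

code = [("mov", None),   # 0
        ("jmp", 3),      # 1
        ("add", None),   # 2  never reached
        ("ret", None)]   # 3
kept = eliminate_unreachable(code)       # instruction 2 is removed
```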
-
Ph.D. Thesis
1986
Sequential Quadratic Programming Methods Based on Approximating a Projected Hessian Matrix (Updating Method, Quasi-Newton, Nonlinear Constraints)
Gurwitz, Chaya Bleich
Abstract
|
PDF
Title: Sequential Quadratic Programming Methods Based on Approximating a Projected Hessian Matrix (Updating Method, Quasi-Newton, Nonlinear Constraints)
Candidate: Gurwitz, Chaya Bleich
Advisor(s): Overton, Michael
Abstract:
We consider the nonlinear programming problem, namely minimizing a nonlinear function subject to a set of nonlinear equality and inequality constraints. Sequential quadratic programming (SQP) methods are particularly effective for solving problems of this nature. It is assumed that first derivatives of the objective and constraint functions are available, but that second derivatives may be too expensive to compute. Instead, the methods typically update a suitable matrix which approximates second derivative information at each iteration. We are interested in developing SQP methods which maintain an approximation to second derivative information projected onto the tangent space of the constraints. The main motivation for our work is that only the projected matrix enters into the optimality conditions for the nonlinear problem. Updating projected second derivative information reduces the dimension of the matrix to be recurred; we avoid the necessity of introducing an augmenting term which can lead to ill-conditioned matrices; and we are able to make use of standard quasi-Newton updates which maintain hereditary positive definiteness. We discuss four possible formulations of the quadratic programming subproblem and present numerical results which indicate that our methods may be useful in practice.
-
Ph.D. Thesis
1986
Analysis of Cache Memories in Highly Parallel Systems
Mcauliffe, Kevin Patrick
Abstract
|
PDF
Title: Analysis of Cache Memories in Highly Parallel Systems
Candidate: Mcauliffe, Kevin Patrick
Advisor(s): Gottlieb, Allan
Abstract:
Though advances in VLSI technology will soon make it practical to construct parallel processors consisting of thousands of processing elements (PEs) sharing a central memory, the performance of these parallel processors is limited by the high memory access time due to interconnect network latency. This thesis is a study of how the performance of a parallel processor is affected by associating a cache memory with each PE of the system. Cache parameters and policies are varied and the performance of the resulting cache configurations is compared. The cache coherence problem is discussed and a solution that is compatible with the philosophy of parallel systems is adopted. Performance is analyzed by analytic and simulation models. Due to time and space limitations the simulation modeling is done in a hierarchical fashion: a primary level simulates a single cache and a secondary level simulates a parallel machine. The simulators can run in trace-driven and self-driven modes. The trace data used to drive the simulators was collected by tracing the reference patterns of actual parallel programs. An approximate analytic model is developed that predicts the queue waiting times of various components of a parallel system, enabling the comparison of a wider range of cache parameters than is possible with the simulators.
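The primary-level, trace-driven simulation described above can be sketched in miniature: replay an address trace through a single cache and report the hit ratio. The direct-mapped organization and parameter values below are hypothetical choices; the thesis's PE and interconnect-network model is omitted:

```python
# Minimal trace-driven simulation of one direct-mapped cache.

def simulate_cache(trace, num_lines=4, line_size=16):
    tags = [None] * num_lines          # one block tag per cache line
    hits = 0
    for addr in trace:
        block = addr // line_size      # which memory block holds addr
        line = block % num_lines       # direct-mapped placement
        if tags[line] == block:
            hits += 1
        else:
            tags[line] = block         # miss: fill the line
    return hits / len(trace)

# Touching the same two blocks repeatedly: misses only on first contact.
ratio = simulate_cache([0, 4, 16, 0, 4, 16])
```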
-
Ph.D. Thesis
1986
Synthesizing Realistic Textures by the Composition of Perceptually Motivated Functions (Graphics)
Perlin, Kenneth H.
Abstract
|
PDF
Title: Synthesizing Realistic Textures by the Composition of Perceptually Motivated Functions (Graphics)
Candidate: Perlin, Kenneth H.
Advisor(s): Lowe, David
Abstract:
This research demonstrates a uniform functional composition framework for modeling and synthesizing complex textures. The appearance of a wide range of natural phenomena can be expressed and efficiently synthesized in this framework. Animation of texture is readily incorporated. Emphasis will be on explaining the properties leading to generality, expressivity, and efficiency. A system is described in which an image is approximated by a finite collection of samples, representing neighborhoods in the image. The user designs visual simulations of surface textures by constructing an algorithm that is to be independently computed at each image sample. Primitive functions are provided that allow control within the texture algorithm of visually important texture properties, such as frequency and first order spatial statistics. The user proceeds by building from these functions. Feedback is provided by images indicating the state of any computed quantity over all samples. The system includes primitive functions allowing the manipulation of such visually discriminable qualities as brightness, contrast, coherent discontinuities, orientation, and features possessing restricted ranges of frequency. These are used to build up composite functions allowing the manipulation of more sophisticated visual qualities. The system is applied to build the appearance of many textures such as water, star fields, flame, smoke, marble, clouds, stucco, rock, and soap films. Major results are twofold. First, it will be shown that a wide range of naturalistic visual textures can be constructed with this approach. Second, a number of particular functions will be demonstrated that encode the common visual elements of disparate visual textures.
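The composition framework can be sketched as small primitive functions, evaluated independently at each sample, combined into a composite texture. The particular primitives below (a hash-style pseudo-random value and a band-summing "turbulence") are hypothetical stand-ins for the system's built-ins, not its actual primitives:

```python
# Sketch: per-sample functional composition of texture primitives.
import math

def noise(x, y):
    """Deterministic pseudo-random value in [0, 1) per point (assumed
    primitive; a stand-in for a proper band-limited noise)."""
    h = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return h - math.floor(h)

def turbulence(x, y, octaves=4):
    """Weighted sum of noise at doubling frequencies: one way to control
    the frequency content mentioned in the abstract."""
    total, freq, amp, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * noise(x * freq, y * freq)
        norm += amp
        freq *= 2.0
        amp *= 0.5
    return total / norm                  # stays in [0, 1)

def marble(x, y):
    # Composite texture: a sine stripe pattern perturbed by turbulence.
    return 0.5 + 0.5 * math.sin(10.0 * x + 4.0 * turbulence(x, y))

sample = marble(0.3, 0.7)                # evaluated independently per sample
```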
-
Ph.D. Thesis
1986
Persistent Data Structures
Sarnak, Neil Ivor
Abstract
|
PDF
Title: Persistent Data Structures
Candidate: Sarnak, Neil Ivor
Advisor(s): Tarjan, Robert
Abstract:
This dissertation introduces the concept of persistence in data structures. Classical algorithms operate on data structures in such a manner that modifications to the structure do not preserve its state as it appeared before the modification. A persistent data structure is one in which multiple versions of the structure as it varies through time are maintained. Data structures that do not maintain the history of states of the structure are called ephemeral. A differentiation between two types of persistence, partial persistence and full persistence, is made. A partially persistent data structure allows the modification only of the most recent version of the structure. This makes partial persistence useful in cases where the history of update operations is required for query purposes but no changes of prior versions are desired. Under certain constraints, any ephemeral data structure may be made persistent without a major blow-up of the space and time complexity measures. Full persistence allows modification of any version of the data structure. This dissertation presents algorithms that support persistent search trees, with applications in computational geometry. In particular, the planar point location problem will be solved using persistent binary search trees with an O(log n) query time and O(n) space. Persistent lists are described, with applications in applicative programming languages. In particular, persistent deques are presented that have constant space overhead per deque operation, while still maintaining O(1) update times. Persistent finger search trees are also presented, with applications in text editing. Persistent finger search trees are implemented with an O(log d) space overhead per update, and an O(log d) time bound, where d is the distance between the finger and the affected position. A general result is shown that allows making arbitrary ephemeral data structures partially persistent with an O(1) space overhead per update operation.
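The flavor of partial persistence can be shown with naive path copying: an insert copies only the nodes on the search path, so old and new versions share all untouched subtrees and every version stays queryable. This simple scheme costs O(log n) extra space per update; the dissertation's techniques achieve the O(1) overhead cited above:

```python
# Sketch of a partially persistent binary search tree via path copying.

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Return the root of a NEW version; the old version is untouched."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    return Node(root.key, root.left, insert(root.right, key))

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

v1 = insert(insert(None, 5), 3)   # version 1: {3, 5}
v2 = insert(v1, 4)                # version 2: {3, 4, 5}, shares nodes with v1
# Both versions remain queryable: v1 does not contain 4, v2 does.
```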
-
Ph.D. Thesis
1986
The Semantics of Shared Variables in Parallel Programming Languages
Shulman, Norman Victor
Abstract
|
PDF
Title: The Semantics of Shared Variables in Parallel Programming Languages
Candidate: Shulman, Norman Victor
Abstract:
Chapter 1 surveys the status of shared variables in parallel programming languages, and points out the problems inherent in the use of shared variables and the importance of a semantic definition. Our approach to the semantics of shared variables is set forth, and used to highlight the deficiencies of shared variables in Ada. Chapter 2 presents a clear simple informal semantic model of shared variables based on the concepts of atomicity, uniqueness and independence. The model captures the relationships between these concepts so that it can be used to resolve questions regarding packing, mutual exclusion, and local copies of shared variables. Chapter 3 discusses the deficiencies of shared variables in Ada. An informal semantic model of shared variables in Ada is presented in terms of the concepts of atomicity, uniqueness and independence. This informal semantic model serves as the basis for proposing changes to the section of the Ada Reference Manual dealing with shared variables for incorporation in a future revision. Chapter 4 shows how the Ada definition can be modified so that execution of programs such as the on-the-fly garbage collector and the Laplace's equation solver mentioned in Chapter 1 will no longer be qualified as erroneous. New restrictions can be imposed to ensure the independence of operations on shared variables. The informal semantic model also serves as the basis for extending the applicability of the axiomatic techniques of Owicki to a wider class of programs subject to certain optimizations of time and space. Chapter 5 shows that it is possible to relax the restrictions on expressions, and to formulate conditions under which it is safe to keep local copies of shared variables and to pack shared structured objects, while preserving the assignment axiom.
-
Ph.D. Thesis
1986
Recursive Data Types in SETL: Automatic Determination, Data Language Description, and Efficient Implementation (Compilers)
Weiss, Gerald
Abstract
|
PDF
Title: Recursive Data Types in SETL: Automatic Determination, Data Language Description, and Efficient Implementation (Compilers)
Candidate: Weiss, Gerald
Abstract:
Very high level languages are often weakly typed in the sense that different occurrences of a name can be associated with distinct types. The types of many entities are nevertheless determinable from the structure of the program, so translators for these languages often incorporate some sort of typefinding algorithm. Due to problems of algorithmic termination, however, these algorithms have been unable to type structures of a recursive nature such as trees. In this thesis we present a method which detects and uncovers the structure of recursive objects, and discuss possible applications of the method to optimization of code. We examine the run-time type model of SETL and the corresponding data representation sublanguage (DRSL), and present a general critique of the design as well as implementation of the current data representation sublanguage. The objects expressible by the latter are shown to be proper subsets of the universe of types assumable by SETL entities at run-time; we present suggestions for extending the representation sublanguage to allow for complete type specification.
-
Ph.D. Thesis
1985
Extraction and Generalization of Expert Advice (Learning, Representation, Induction)
Benjamin, David Paul
Abstract
|
PDF
Title: Extraction and Generalization of Expert Advice (Learning, Representation, Induction)
Candidate: Benjamin, David Paul
Abstract:
This work describes a method for representing knowledge in production systems which makes use of the conflict set. This permits a rich description of task situations, and allows the use of control productions to effect conflict resolution. A set of extensions to the OPS5 production system is described which facilitates the implementation of this approach within OPS5. This extended system is then used to implement a multi-level, goal-directed production system for the construction of expert systems, CAMERA, in which control information is automatically built from the actions of an expert trainer. This control information consists of sequencing and goal information which is interactively extracted from the trainer by CAMERA, and generalized by DISC, which models generalization as the process of finding 'discriminating' features, which are those features of a situation that cause a particular method to be chosen, and then constructing a description of those features. When solving a task, CAMERA examines only the discriminating features specified in the generalized control rules. Thus, instead of matching all the productions against the working memory, CAMERA considers only the relevant rules. Experiments with the system are described.
-
Ph.D. Thesis
1984
On the use of Global Optimization Algorithms for the Detection of Semantic Programming Errors (SETL, Data Flow, Type Finding)
Freudenberger, Stefan M.
Abstract
|
PDF
Title: On the use of Global Optimization Algorithms for the Detection of Semantic Programming Errors (SETL, Data Flow, Type Finding)
Candidate: Freudenberger, Stefan M.
Abstract:
It has been pointed out repeatedly that it should be possible to adapt global program optimization algorithms for the purpose of detecting faults in programs. It has become clear that global program analysis can be beneficial in program development, debugging, verification, and documentation since it can provide information about all possible executions of the code at once. The techniques employed are not only capable of revealing errors, interfacing errors, and other shortcomings but of doing so in a way which helps to pinpoint the source of problems. In this dissertation we systematically examine the global optimization techniques available today to determine how these techniques can be used to aid the rapid, compile-time detection of program errors. The techniques considered include flow tracing, type finding, and value flow. The approach is to determine what facts about a program can be collected using the best available program analysis technique, and to use this information to mark suspicious program segments. The techniques proposed have been implemented in an extensive global bug finder, and examples of its use are included.
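The flavor of this kind of compile-time fault detection can be sketched with a tiny value-flow pass over straight-line code that flags variables used before any assignment reaches them. The three-address statement format is a hypothetical simplification, not the dissertation's SETL setting:

```python
# Sketch: flag uses of variables that no prior assignment defines,
# marking the suspicious program segment by line number.

def find_use_before_def(stmts, defined=()):
    """stmts: list of (target, operands) pairs, executed in order."""
    defined = set(defined)
    suspicious = []
    for lineno, (target, operands) in enumerate(stmts, start=1):
        for v in operands:
            if v not in defined:
                suspicious.append((lineno, v))   # mark suspicious segment
        defined.add(target)
    return suspicious

program = [("x", []),          # x := const
           ("y", ["x", "z"]),  # y := x + z   -- z is never assigned
           ("w", ["y"])]       # w := y
errors = find_use_before_def(program)
```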
-
Ph.D. Thesis
1984
Description of Operating Systems using Very-High-Level Diction (Programming Languages)
Leshem, Gavriel
Abstract
|
PDF
Title: Description of Operating Systems using Very-High-Level Diction (Programming Languages)
Candidate: Leshem, Gavriel
Abstract:
Operating systems are generally large and complicated, and therefore difficult to write, debug and maintain. This thesis approaches the problem of simplifying these complex descriptions by writing operating system prototypes using a very high-level programming language that significantly relieves the burden of low-level and machine dependent details. The language used includes special constructs designed to facilitate clear and concise description of the mechanisms involved in multiprocessing systems: (1) a coroutine mechanism to implement concurrent processes, (2) an interprocess communication mechanism, (3) a real I/O facility that provides access to I/O system services. Using these intermediate-level constructs simplifies the problem of describing the high-level structure of operating systems significantly. These constructs are written in a high-level programming language, using several simple low-level primitives. They can be modified easily and new operations can be added at will. The main purpose of our high-level approach is to provide a tool for describing and designing operating systems. The high-level description can be used as a blueprint for writing the real operating system in a suitable lower-level implementation language. The thesis also describes an implementation of the suggested language that can be used to test the high-level description of an operating system and possibly also to simulate the real system to predict its potential performance. We test our descriptive tools by giving extended descriptions of two well-known operating systems using the proposed high-level language. Several basic design issues concerning these operating systems are then examined and the operating systems are compared in a manner that emphasizes the design issues that emerge. Some modifications of these systems, inspired by the high-level representation, are suggested.
-
Ph.D. Thesis
1984
Decidability and Proof Procedures for Set Theory with a Choice Operator
Omodeo, Eugenio Giovanni
Abstract
|
PDF
Title: Decidability and Proof Procedures for Set Theory with a Choice Operator
Candidate: Omodeo, Eugenio Giovanni
Advisor(s): Davis, Martin D.
Abstract:
Various decision algorithms are described and proved correct, each applying to a particular collection of unquantified set-theoretical formulas. Some of these algorithms are able to determine whether each given formula is satisfiable, some others can only establish whether it is satisfiable by means of an interpretation in which the values of the terms appearing in the formula are finite sets. In most cases, formulas are allowed to involve a choice operator which selects from every non-empty set s the minimum of s with respect to a well-ordering of the class of all sets. A semi-decision procedure is also described which applies to unquantified formulas in which all familiar set-theoretical operators are allowed to appear, with certain limitations imposed only on the occurrences of the unionset and choice operators. The execution of this procedure only terminates when the input formula is finitely satisfiable.
-
Ph.D. Thesis
1984
A Self-Organizing Database System - a Different Approach to Query Optimization
Piatetsky-Shapiro, Gregory Ilya
Abstract
|
PDF
Title: A Self-Organizing Database System - a Different Approach to Query Optimization
Candidate: Piatetsky-Shapiro, Gregory Ilya
Abstract:
A Self-Organizing Database System (SODS) monitors queries asked, finds a good (or optimal) database structure for those queries, and suggests or does the reorganization. In this thesis we describe a prototype SODS for single-file relational queries and give an integrated analysis of its major design problems: (1) estimation of the number of records satisfying a condition (i.e., condition selectivity); (2) query optimization; (3) storing information about a set of queries; (4) optimal selection of secondary indices. We present new results for each of those problems. Some of this research was implemented in FASTSCAN, a commercial query system. We present a new method for accurate estimation of the number of records satisfying a condition field rel constant, where rel is one of =, <, >, ≤, ≥. We also examine estimates for more complicated conditions. We present elementary operations (such as UNION, INTERSECT) on pointer and record streams. We show how to use the query parse tree to construct a query evaluation method (EM) from those operations. Then we give an algorithm for selecting the optimal EM, based on converting the query to conjunctive normal form. We examine ways to compress information about a set of queries by combining information for similar queries. We derive a compression scheme which allows a correct and fast computation of the cost of the average query under any index set. We combine all previous results in analyzing the NP-hard problem of optimal index selection. We present two algorithms for it. The first one always finds the optimal answer and runs fast on real-size problems despite its exponential worst-case complexity. The second one (a Greedy method) runs much faster, yet finds the optimal answer very frequently. We analyze the Maximum Cover problem (also NP-hard), a simplification of the optimal index selection. We prove that the Greedy method is an epsilon-approximate algorithm: its answer is always > 63% of the optimal answer.
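The Greedy method for the Maximum Cover simplification can be sketched directly: repeatedly pick the set covering the most still-uncovered elements. Greedy max cover is the classical (1 - 1/e)-approximation, matching the "> 63% of optimal" bound cited above; the "queries covered per candidate index" data below is hypothetical:

```python
# Sketch of greedy Maximum Cover: choose at most k sets maximizing the
# size of their union. Each candidate index "covers" the queries it speeds up.

def greedy_max_cover(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: len(set(s) - covered))
        if not set(best) - covered:
            break                      # nothing new left to cover
        chosen.append(best)
        covered |= set(best)
    return chosen, covered

candidates = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
chosen, covered = greedy_max_cover(candidates, k=2)
```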
-
Ph.D. Thesis
1984
Concurrency Control using Locks in Distributed Databases
Wolfson, Ouri
Abstract
|
PDF
Title: Concurrency Control using Locks in Distributed Databases
Candidate: Wolfson, Ouri
Abstract:
Distributed Databases have drawn a great deal of research interest recently because of a combination of several related reasons. First is the tremendous expansion in the quantity of data that has to be processed in the modern world. Second is the growth in the number of interrelated processing centers because microcomputers and communication technology enable greater dispersion of organizations. Third is the realization that complex problems to be addressed in this and the next decade, such as different aspects of Artificial Intelligence, will require at least some parallel processing for adequate solution. In Distributed Databases the typical problems of Centralized Databases become more difficult. One of them is Concurrency Control. It can be summarized as follows. Users of the Database access it by executing transactions. Different transactions are executed concurrently, and therefore their actions interleave. Without proper control this interleaving may produce incorrect results, even if individual transactions are correct. The Concurrency Control process has to prevent these situations. There are several possible mechanisms for controlling concurrency, of which the most widely used is Locking. In this thesis we examine and analyze Locking as a Concurrency Control mechanism for Distributed Databases. We define Distributed Locking Policies (methods for locking entities in Distributed Databases) and show how existing Policies for a Centralized Database generalize to the Distributed case. We also define a new category of Distributed Locking Policies, D-policies, into which these generalizations fall. An algorithm which determines whether all transactions of a given D-policy are guaranteed to produce only correct interleavings (are safe) is presented. The algorithm is efficient, even though testing an arbitrary set of transactions for safety is coNP-complete.
However, we prove that optimal locking of transactions to satisfy the conditions tested by the algorithm is NP-hard even for a Centralized Database.
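The danger of uncontrolled interleaving that the abstract describes can be illustrated with a standard conflict-serializability check, a sketch in Python rather than the thesis's D-policy safety algorithm; all names and the toy schedules are illustrative.

```python
# Conflict-serializability test for an interleaved schedule via cycle
# detection in the precedence graph (a textbook technique, not the
# D-policy algorithm of the thesis).

def conflicts(op1, op2):
    """Two operations conflict if they come from different transactions,
    touch the same item, and at least one of them writes."""
    t1, a1, x1 = op1
    t2, a2, x2 = op2
    return t1 != t2 and x1 == x2 and 'W' in (a1, a2)

def is_conflict_serializable(schedule):
    """schedule: list of (txn, action, item), e.g. ('T1', 'R', 'x').
    Build the precedence graph and accept iff it is acyclic."""
    graph = {}
    for i, op1 in enumerate(schedule):
        for op2 in schedule[i + 1:]:
            if conflicts(op1, op2):
                graph.setdefault(op1[0], set()).add(op2[0])

    def has_cycle(node, stack, done):
        if node in stack:
            return True
        if node in done:
            return False
        stack.add(node)
        cyc = any(has_cycle(m, stack, done) for m in graph.get(node, ()))
        stack.discard(node)
        done.add(node)
        return cyc

    return not any(has_cycle(n, set(), set()) for n in graph)

# T1 and T2 both read then write x: the first interleaving is incorrect
# (a lost update), the second is serial and therefore safe.
bad = [('T1', 'R', 'x'), ('T2', 'R', 'x'), ('T1', 'W', 'x'), ('T2', 'W', 'x')]
good = [('T1', 'R', 'x'), ('T1', 'W', 'x'), ('T2', 'R', 'x'), ('T2', 'W', 'x')]
```

A locking policy is safe precisely when every schedule it admits passes a test of this kind.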
-
Ph.D. Thesis
1983
A Practical Method for LR and LL Syntactic Error Diagnosis and Recovery
Burke, Michael George
Abstract
|
PDF
Title: A Practical Method for LR and LL Syntactic Error Diagnosis and Recovery
Candidate: Burke, Michael George
Abstract:
A powerful, practical, and language-independent method for diagnosing and recovering from syntactic errors within the LR and LL parsing frameworks is described. The method proceeds in three phases. The simple recovery phase attempts a single token modification of the source text, scope recovery attempts a multiple token insertion to close one or more open scopes, and secondary recovery involves a multiple deletion of tokens surrounding the error point. When the token at which the error is detected is not the token that is in error, points on the parse stack must be considered if the error is to be corrected. Condensation that has occurred on the parse stack, however, is sometimes harmful in this context. Also, in some of the parsing frameworks under consideration, unwanted condensation may occur even if the error is detected at the point at which it occurs. This problem motivates the existence of four versions of the method involving tradeoffs between the quality of error recovery and efficiency with respect to space and time. Techniques are described that make the method efficient in practice. Other implementation issues, such as language specific tuning and the issuing of diagnostic messages, are discussed. Empirical results are presented that demonstrate that the versions of the method offer choices ranging from very high quality recovery with reasonable efficiency to high quality recovery with excellent efficiency.
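The simple recovery phase can be sketched as a brute-force search over single-token edits; this toy uses a balanced-parenthesis check as a stand-in parser, not the LR/LL machinery of the thesis, and all names are illustrative.

```python
# Toy "simple recovery": try every single-token deletion, insertion, or
# substitution and keep the first repair that parses. The grammar
# (balanced parentheses) is a placeholder for a real LR/LL parser.

VOCAB = "()"

def parses(s):
    depth = 0
    for c in s:
        depth += 1 if c == '(' else -1
        if depth < 0:
            return False
    return depth == 0

def simple_recovery(text):
    """Return a repaired string obtained by one token edit, or None."""
    n = len(text)
    for i in range(n):                      # single deletion
        cand = text[:i] + text[i + 1:]
        if parses(cand):
            return cand
    for i in range(n + 1):                  # single insertion
        for t in VOCAB:
            cand = text[:i] + t + text[i:]
            if parses(cand):
                return cand
    for i in range(n):                      # single substitution
        for t in VOCAB:
            cand = text[:i] + t + text[i + 1:]
            if parses(cand):
                return cand
    return None                             # defer to later phases
```

When no single-token repair succeeds, a real implementation would fall through to scope recovery and then secondary recovery, as the abstract describes.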
-
Ph.D. Thesis
1983
Resolution by Unification and Equality
Digricoli, Vincent Joseph
Abstract
|
PDF
Title: Resolution by Unification and Equality
Candidate: Digricoli, Vincent Joseph
Abstract:
In resolution by unification and equality, we recast the theory of binary resolution on the basis of the properties of the equality relationship as stated by the equality axioms. In standard binary resolution as introduced by J. A. Robinson in 1965, complete and strict unification is the sole basis for resolving complementary literals, leading to exceptionally long proofs for even simple theorems involving equality. In RUE resolution, implicit use of the equality axioms is made through their incorporation into two rules of inference which are sound and complete to prove E-unsatisfiability. Proofs by RUE resolution are significantly shorter and more transparent than standard refutations with the equality axioms. These qualities permit more effective application of heuristics to guide the search for refutations. We here present the complete theory of RUE resolution, with proofs of lemmas and theorems in support of the theory. We define RUE hyperresolution as a restriction strategy and develop a heuristic theory to order the search for refutations. We have implemented an RUE theorem prover and performed experiments in the fields of Boolean algebra, Ring theory and Group theory. We present a careful comparison with the work of McCharen, Overbeek and Wos, whose theorem prover using unification resolution with the equality axioms and paramodulation represents one of the most successful uses of unification resolution. The comparison of results presents major evidence that RUE resolution is a significant advance over unification resolution.
-
Ph.D. Thesis
1983
Measuring SETL Performance
Shields, Lynwood David
Abstract
|
PDF
Title: Measuring SETL Performance
Candidate: Shields, Lynwood David
Advisor(s): Schwartz, Jacob T.
Abstract:
Current computer technology is being driven by the hardware advances that have provided a constant and dramatic decrease in the cost of elementary hardware operations. This has made more feasible the use of high-level languages that permit program development without the constant attention to detail needed to achieve efficient execution that characterizes low-level languages; indeed, such languages can be realized by a combination of microcode and special-purpose VLSI chips. However, effective use of this technology requires an understanding of the underlying performance issues. We have analyzed the problem of measuring performance of high-level languages by studying in detail one such language, SETL, and have developed a set of measurement tools addressed both to the user and the implementor. Our thesis is that such measurement efforts must aim to provide measurement tools that can be integrated into the system, but only after their efficacy has been demonstrated by their use on real programs. This work has resulted in prototype versions of four program profilers, each providing a specific view of SETL performance; we discuss their use in analyzing, and then improving, the performance of actual SETL programs. We also discuss the implementation of the hard code system that provides an essential starting point for evaluating the effectiveness of the representation sublanguage provided by SETL. Finally, we indicate some ways in which SETL performance can be improved.
-
Ph.D. Thesis
1983
Undecidable Complexity Statements in a Hierarchy of Extensions of Primitive Recursive Arithmetic
Sigal, Ron Mark
Abstract
|
PDF
Title: Undecidable Complexity Statements in a Hierarchy of Extensions of Primitive Recursive Arithmetic
Candidate: Sigal, Ron Mark
Advisor(s): Weyuker, Elaine; Davis, Martin D.
Abstract:
For each transfinite ordinal α ≤ ε₀, we fix a unique well-ordering of the natural numbers which we call its canonical well-ordering. Let S(α) be Primitive Recursive Arithmetic plus function definition by transfinite recursion on the canonical well-ordering of order type α. For a hierarchy of theories S(α), where ω^(ω^ω) ≤ α < ε₀, we define functions φ_α such that statements asserting extremely loose upper bounds on the computational complexity of φ_α are independent of S(α). We quantify the gap between actual and provable complexity bounds in terms of the Löb-Wainer hierarchy of rapidly growing functions. A statement asserting a primitive recursive upper bound for the complexity of φ_α can be proven in a theory slightly higher in the hierarchy than S(α).
-
Ph.D. Thesis
1983
Formal Languages with Oracles
Weixelbaum, Elia S.
Abstract
|
PDF
Title: Formal Languages with Oracles
Candidate: Weixelbaum, Elia S.
Abstract:
A relativization of formal language theory is studied in this dissertation. Specifically, we examine possible relativizations of the four language classes of the Chomsky hierarchy. Definitions are given for oracle finite automata, oracle pushdown automata, oracle linear bounded automata, and oracle Turing machines. The relativized regular languages are characterized via results derived from AFL (abstract families of languages) theory. We then use this characterization to help us derive a relativization of the Chomsky-Schutzenberger theorem for relativized context free languages. We examine relativized recursively enumerable (r.e.) languages by studying oracle Turing machines and also by suggesting a definition for an oracle phrase structure grammar. We demonstrate two different types of equivalences between these two models. The context sensitive languages are relativized in the same manner as are the r.e. languages, although there are difficulties in proving the respective results for the context sensitive case. Several unresolved questions remain in this case.
-
Ph.D. Thesis
1982
Decision Algorithms for a Class of Set-Theoretic Formulae Involving One Occurrence of the Union-Set Operator
Breban, Michael
Abstract
|
PDF
Title: Decision Algorithms for a Class of Set-Theoretic Formulae Involving One Occurrence of the Union-Set Operator
Candidate: Breban, Michael
Advisor(s): Schwartz, Jacob T.
Abstract:
We consider the first order language allowing the operators = (equality), ∈ (membership), ∪ (binary union), ∩ (binary intersection), \ (set difference), { } (singleton former) and one occurrence of Un (unary union). We show that unquantified formulae of this language are decidable. As a preparatory result we show that unquantified formulae of the above mentioned language not involving the singleton former are decidable.
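The operators of this language have a concrete model in the hereditarily finite sets, which Python's nested frozensets represent directly; the sketch below shows only the semantics of the terms, not the decision algorithm, and all names are illustrative.

```python
# Hereditarily finite sets as nested frozensets: a model in which the
# operators of the language (=, membership, union, intersection,
# difference, singleton former, unary union Un) can all be evaluated.

def singleton(x):
    """The singleton former { }."""
    return frozenset({x})

def Un(s):
    """Unary union: the union of all members of s."""
    out = set()
    for member in s:
        out |= member
    return frozenset(out)

empty = frozenset()          # the empty set
a = singleton(empty)         # {empty}
b = singleton(a)             # {{empty}}
pair = a | b                 # binary union: {empty, {empty}}
```

Equality, membership, intersection (`&`) and set difference (`-`) come for free from the frozenset type, so any unquantified term of the language can be evaluated mechanically over such sets.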
-
Ph.D. Thesis
1982
The Role of the High Level Specification in Programming by Transformation: Specification and Transformation by Parts
Merritt, Susan Mary
Abstract
|
PDF
Title: The Role of the High Level Specification in Programming by Transformation: Specification and Transformation by Parts
Candidate: Merritt, Susan Mary
Abstract:
Specification by parts is a technique for constructing a very high level specification of a problem. The specification is then the target of transformation by parts, a global transformation strategy, which yields a family of high level algorithms which are correct and which solve the problem. The specifications are easy to construct, to understand and to modify. The key to the specification by parts technique is the use of weak parts. Output conditions are factored into conjunctions of weaker conditions, called weak parts, each of which is easier to satisfy than the original condition. In the transformation by parts, an initial guess is made for the output object. The guess satisfies some subset of the weak parts; the conditions in this subset are called the invariant conditions. A general iterative structure is built, which incrementally changes the initial guess, keeping the invariant conditions true, and converging to the remaining conditions. The methodology demonstrates the relationship between invariance and convergence in algorithm construction. In particular it demonstrates that algorithms for the same problem are often the result of different choices of invariant and convergent conditions. The methods are illustrated in three case studies and in three supplementary examples (which are smaller in scope than the case studies), all of which are fundamental computer science problems. These applications demonstrate the flexibility and ease with which the high level specifications can be constructed and transformed. They also demonstrate the potential which this methodology offers for the discovery of new algorithms, the illustration of connections among known algorithms, and the possible semi-automation or automation of algorithm construction.
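The invariant/convergent split can be made concrete on sorting, which is my illustration rather than one of the thesis's case studies: the output condition "out is a permutation of the input and out is ordered" factors into two weak parts, the initial guess satisfies the permutation part, and the loop keeps it true while converging to orderedness.

```python
# Transformation by parts, sketched on sorting. Weak parts:
#   P1 (invariant):  out is a permutation of the input
#   P2 (convergent): out is ordered
# The initial guess (the input itself) satisfies P1; each step
# preserves P1 and moves toward P2 by swapping an adjacent
# out-of-order pair.

def ordered(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def sort_by_parts(xs):
    out = list(xs)               # initial guess: P1 holds trivially
    while not ordered(out):      # iterate until P2 also holds
        for i in range(len(out) - 1):
            if out[i] > out[i + 1]:
                out[i], out[i + 1] = out[i + 1], out[i]  # preserves P1
    return out
```

Choosing the other part as the invariant (keep `out` ordered, converge to a full permutation) would derive insertion sort instead, illustrating the abstract's point that different invariant/convergent choices yield different algorithms for the same problem.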
-
Ph.D. Thesis
1982
Software Structures for Ultraparallel Computing
Rudolph, Lawrence S.
Abstract
|
PDF
Title: Software Structures for Ultraparallel Computing
Candidate: Rudolph, Lawrence S.
Advisor(s): Gottlieb, Allan; Schwartz, Jacob T.
Abstract:
In this thesis we implement several basic parallel processing primitives by using a replace-add operation, which can supersede the standard test and set, and which appears to be a universal primitive for efficiently coordinating large numbers of independently acting sequential processors. The replace-add is essentially an indivisible add-to-memory operation, although concurrent replace-adds can all be processed in a single cycle. In particular, we use the replace-add to develop routines for concurrent access to a queue and show how they can be used to devise many highly parallel algorithms as well as a distributed, concurrent task scheduler. The paracomputer forms our underlying theoretical model of parallel computation, although we also consider a realistic architecture approximating this model. We justify our use of the replace-add operation by presenting a hardware implementation that permits multiple replace-adds to be processed nearly as efficiently as loads and stores. Moreover, the crucial special case of concurrent replace-adds updating the same variable is handled particularly well: If every PE simultaneously addresses a replace-add at the same variable, all these requests are satisfied in the time required to process just one request.
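The queue idiom built on replace-add (today usually called fetch-and-add) can be sketched as follows; a lock stands in for the combining hardware the thesis describes, and all class and method names are illustrative.

```python
# A replace-add (fetch-and-add) primitive and the queue-insertion idiom
# built on it: each enqueuer claims a distinct slot with one atomic
# operation, so concurrent insertions never collide.

import threading

class FetchAndAdd:
    """Software stand-in for the hardware replace-add."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()
    def fetch_and_add(self, delta):
        """Atomically return the old value and add delta to it."""
        with self._lock:
            old = self._value
            self._value += delta
            return old

class ConcurrentQueue:
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.tail = FetchAndAdd()
    def enqueue(self, item):
        # One replace-add hands out a unique slot index per caller.
        self.slots[self.tail.fetch_and_add(1)] = item

q = ConcurrentQueue(100)
threads = [threading.Thread(target=q.enqueue, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all threads finish, every item occupies exactly one slot: the point of the primitive is that this holds no matter how the enqueues interleave.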
-
Ph.D. Thesis
1981
Stochastic Solutions to the Schroedinger Equation for Fermions
Arnow, David Moss
Abstract
|
PDF
Title: Stochastic Solutions to the Schroedinger Equation for Fermions
Candidate: Arnow, David Moss
Abstract:
An exact stochastic method has been developed for generating the antisymmetric eigensolution of lowest index and its associated eigenvalue for the Schroedinger wave equation in 3N dimensions. The method is called the Green's function Monte Carlo method for fermions (FGFMC) because it is based on a Monte Carlo solution to the integral form of the Schroedinger equation (using Green's function) and because it is the fermion class of particles in physics which require antisymmetric solutions. The solution consists of two sets of 3N-dimensional points, {R_j⁺} and {R_j⁻}, distributed by density functions ψ⁺ and ψ⁻, whose difference, ψ⁺ − ψ⁻, is proportional to the eigensolution, ψ_F. These sets may be used to estimate integrals of the form [formula omitted in the source], where R = (x_1, ..., x_3N) and where f(R) and g(R) are antisymmetric functions. By setting g(R) to ψ_T(R) and f(R) to Hψ_T(R), where ψ_T is an antisymmetric trial wave function satisfying the boundary conditions, E_F is obtained. The method is exact because the only sources of error are variance and bias, both of which can be estimated and reduced, either by employing larger sample sizes, or by reconstructing the sampling procedure in ways that make greater use of our understanding of the problem (importance sampling). There are no physical or mathematical approximations other than the statistical one. The crux of the method is a sampling procedure which constructs the two sets of points in linear time (as a function of accuracy). Earlier methods were exponential in cost. The FGFMC method is successfully applied to a one dimensional problem and a nine dimensional problem, the results of which are presented here. These results demonstrate that this method can be successfully applied to small physical problems on medium-scale computing machines.
The key to this success was the transformation of the problem from exponential to linear cost as a function of accuracy. The strong dependence on dimensionality, however, currently results in an exponential cost as a function of problem size, and this, until overcome, imposes a severe barrier to calculations on large systems.
-
Ph.D. Thesis
1981
Synchronization Efficiency
Borg, Anita
Abstract
|
PDF
Title: Synchronization Efficiency
Candidate: Borg, Anita
Abstract:
A generally applicable methodology for the analysis of synchronization efficiency is introduced. It is based upon the assumption that synchronization is required because of the need to control the use of resources by concurrent processes. Two aspects of synchronization efficiency are identified: time efficiency and accuracy efficiency. Time efficiency provides a measure of the use of resources during synchronization. Accuracy efficiency specifies how well a solution to a synchronization problem supports the rules of a problem. The methodology involves the simulation of solutions to synchronization problems as greater and greater implementation detail is specified. The assumptions made concerning the execution times of operations, especially synchronization operations, are seen to be crucial to the correct analysis of synchronization efficiency. It is argued that the only reasonable assumption for the execution times of synchronization operations, when their implementation is left unspecified, is that they execute instantaneously. However, it is also shown that this assumption must be used with care in order to avoid erroneous conclusions. The methodology is applied to PV, Monitor, and ADA solutions to the mutual exclusion, reader-writer, and consumer-producer problems. The PV solutions were usually the most efficient, while the ADA solutions were found to be the least efficient. It is also shown that no single characteristic of a solution determines its efficiency. However, the primary characteristics affecting efficiency are shown to be: (1) the execution time required for synchronization; (2) the rules for execution of the synchronizing computations; (3) the amount of competition among processes; (4) the amount and cost of process switching required during synchronization. It is the interaction of these factors which determines synchronization efficiency.
-
Ph.D. Thesis
1981
Circle Graphs
Buckingham, Mark Alan
Abstract
|
PDF
Title: Circle Graphs
Candidate: Buckingham, Mark Alan
Advisor(s): Golumbic, Martin
Abstract:
From a circle with chords we may derive a graph whose nodes correspond to chords and whose edges correspond to intersecting chords. Such a graph is called a circle graph. After numbering the endpoints of the chords such that two endpoints are numbered the same iff the endpoints belong to the same chord, we form a circle graph sequence by reading off these numbers going around the outside of the circle. Circle graph sequences are often used to prove properties of circle graphs. In this dissertation we discuss many mathematical and algorithmic aspects of circle graphs. The number of different circle with chords representations that yield a chordless path is given. The property that a circle with chords is connected (that is, its derived circle graph is connected) and the property that a circle with chords has two separated chords (that is, two chords that cannot both be intersected by a third chord without the third intersecting a fourth chord) are described in terms of circle graph sequences. They are found to be dual to one another. An incomplete forbidden subgraph characterization of circle graphs is also presented. An important result of this dissertation is that the Berge Strong Perfect Graph Conjecture is shown to hold for the class of circle graphs. Many properties of p-critical graphs and partitionable graphs are given, most with simplified proofs. Some new results are presented and a new, very simple proof of the Berge Conjecture for K_{1,3}-free graphs is put forward. Very efficient algorithms for finding maximum (weighted) cliques and maximum (weighted) stable sets of the derived circle graph of a circle graph sequence are given.
We find an O(e log₂ω) algorithm for the unweighted clique problem, an O(δe) algorithm for the weighted clique problem and an O(c) algorithm for the weighted stable set problem, where e is the number of edges in the graph, ω the maximum clique size, δ the maximum degree and c the number of occurrences of an interval being completely contained in another interval in the circle graph sequence. Some open problems for further research are listed.
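Deriving the circle graph from its sequence is mechanical: each number appears twice (the two endpoints of a chord), and two chords intersect exactly when their endpoint pairs interleave around the circle. A small sketch under those definitions, with illustrative names:

```python
# Build the derived circle graph of a circle graph sequence: chords a
# and b intersect iff exactly one endpoint of b lies strictly between
# the two endpoints of a in the sequence.

def circle_graph(seq):
    """seq: circle graph sequence, each chord label appearing twice.
    Returns (chords, edges) of the derived circle graph."""
    pos = {}
    for i, label in enumerate(seq):
        pos.setdefault(label, []).append(i)
    chords = sorted(pos)
    edges = set()
    for a in chords:
        for b in chords:
            if a < b:
                a1, a2 = pos[a]
                b1, b2 = pos[b]
                # interleaved iff exactly one of b's endpoints falls
                # strictly between a's endpoints
                if (a1 < b1 < a2) != (a1 < b2 < a2):
                    edges.add((a, b))
    return chords, edges
```

The sequence 1,2,1,2 encodes two crossing chords (one edge), while 1,1,2,2 encodes two disjoint chords (no edge).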
-
Ph.D. Thesis
1981
Decision Procedures for some Classes of Unquantified Set Theoretic Formulae
Ferro, Alfredo
Abstract
|
PDF
Title: Decision Procedures for some Classes of Unquantified Set Theoretic Formulae
Candidate: Ferro, Alfredo
Advisor(s): Schwartz, Jacob T.; Mammana, Carmelo
Abstract:
We consider the first order language consisting of = (equality), ∈ (membership), ∪ (binary union), ∩ (binary intersection), \ (set difference), and pow (powerset former). We show that the class of all universal sentences of this language is decidable, provided that we impose the strong restriction that at most two terms appear as arguments of the powerset former. As a preliminary result we show that the class of all universal sentences in the above language, extended by allowing infinitely many constants, one for each hereditarily finite set, is decidable provided that we allow only a single occurrence of the powerset former.
-
Ph.D. Thesis
1981
A Transformational Framework for Automatic Derived Data Control and its Applications in an Entity-Relationship Data Model
Koenig, Shaye
Abstract
|
PDF
Title: A Transformational Framework for Automatic Derived Data Control and its Applications in an Entity-Relationship Data Model
Candidate: Koenig, Shaye
Abstract:
This thesis investigates the specification, implementation and application of derived data in the context of MADAM, an entity-relationship oriented, map-based data model/programming language for database conceptual schema representation and processing. The data representation and manipulation facilities of MADAM, described in chapter 2, represent a synthesis of ideas from the areas of very high level languages, in particular SETL, and the binary association and entity-relationship approaches to data modeling. Derived data refers to data that appears to exist in its declared form, but is actually derived from related data in the database. Previous approaches to the materialization of derived data have been based on a global recalculation strategy in which derived data is recomputed whenever it is referenced. In this thesis we present an alternative approach in which derived data is explicitly stored and incrementally maintained. In chapter 3, we describe the definition of derived data in MADAM; discuss its importance as a means of fostering logical data independence, providing access control mechanisms, and supporting semantic relativism; and present a unified framework for the automatic maintenance of derived data. This framework is based on the transformational techniques of finite differencing, in which repeated costly computations are replaced by more efficient incremental counterparts. Beyond its importance for supporting alternative views of the same data, additional applications of our incremental maintenance approach to the implementation of summary data, integrity control, and triggers are discussed in chapter 4.
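The finite-differencing idea behind this maintenance framework fits in a few lines: a stored derived value is updated by a cheap delta on every change instead of being recomputed on every reference. The sketch below is illustrative Python, not MADAM syntax, and the names are hypothetical.

```python
# Incremental maintenance of derived data: `balance` is declared data
# that is really derived (the sum of `deposits`), kept consistent by
# O(1) deltas rather than O(n) recomputation at each reference.

class Account:
    def __init__(self):
        self.deposits = []
        self.balance = 0              # derived: sum(self.deposits)

    def insert(self, amount):
        self.deposits.append(amount)
        self.balance += amount        # delta, not recompute

    def delete(self, amount):
        self.deposits.remove(amount)
        self.balance -= amount        # inverse delta

acct = Account()
for amt in [10, 25, 5]:
    acct.insert(amt)
acct.delete(25)
```

The same pattern extends to the applications the abstract lists: summary data is a maintained aggregate, an integrity constraint is a maintained boolean, and a trigger is code attached to the delta.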
-
Ph.D. Thesis
1981
Upper and Lower Bounds on the Performance of Parallel Algorithms
Kruskal, Clyde Philip
Abstract
|
PDF
Title: Upper and Lower Bounds on the Performance of Parallel Algorithms
Candidate: Kruskal, Clyde Philip
Advisor(s): Schwartz, Jacob T.
Abstract:
With the advent of VLSI, new opportunities in computer architecture are emerging. Parallel processors composed of many thousands of PEs will soon be practical. In this thesis, we derive both upper and lower bounds for parallel algorithms. Our analyses emphasize two specific models of parallel computation--the ultracomputer and the paracomputer--but the general ideas and many of the results are much more widely applicable. We present general lower bounds for solving a wide class of problems on direct connection machines, and a sharper lower bound for effecting permutations. This latter bound shows that the permutation problem is not completely parallelizable on any direct connection machine that is not almost completely connected. In addition, using a very general model of parallel computation, we study the worst case time complexity of searching in parallel. We then present a large collection of basic algorithms for both the ultracomputer and the paracomputer. Since the performances of many of these algorithms achieve the lower bounds mentioned above, both models are extremely effective parallel computer systems. Finally, a systematic method for generalizing any dependent-size algorithm to an independent-size one is given.
-
Ph.D. Thesis
1980
The Transformational Approach to the Development and Verification of Programs in a very High Level Language
Deak, Edith Gail
Abstract
|
PDF
Title: The Transformational Approach to the Development and Verification of Programs in a very High Level Language
Candidate: Deak, Edith Gail
Abstract:
In informal exposition, the correctness of a complex algorithm is often demonstrated by deriving it through successive refinement steps from a high level specification, and supplying proofs of the underlying principles used in the process. However, most existing mechanical program verifiers ignore this standard expository practice, and are generally designed to verify programs written in a low level form. While logically simple algorithms can be handled adequately in this manner, attempting to verify more complex algorithms at a low level requires treatment of implementation details which obscure the main arguments of the verification. This thesis describes a systematic technique for proving algorithms correct using a transformational approach, and presents a detailed transformation/verification scenario of the proof of a variety of complex combinatorial algorithms. The algorithms treated here are considerably more involved than those verified by other methods. The programming language used is a variant of SETL, adapted for program verification, which provides a medium for high level specification. A program P is annotated with logical formulae of set theory, which are called assumptions and assertions. P is said to be partially correct if every computation which satisfies all assumptions also satisfies all assertions. In order to prove the correctness of P, which initially contains only assumptions, we apply proof rules which are used both to transform the program into logical formulae called verification conditions and then to prove these verification conditions. The transformation rules are unique in that they enable the combination of correct program fragments. We are able to reuse general code fragments in a variety of contexts without reproof and to derive several different low level algorithms from a single high level algorithm. The transformations often require proof of enabling conditions.
In such cases, when a transformation is performed, the enabling condition is introduced into the program text as an assumption which must be verified in turn using the proof mechanism described above.
-
Ph.D. Thesis
1980
An Implementation for Gyve: a Language for Concurrent Processing
Meyer, Jeanine Marietta
Abstract
|
PDF
Title: An Implementation for Gyve: a Language for Concurrent Processing
Candidate: Meyer, Jeanine Marietta
Abstract:
This thesis presents a design for implementing a programming language, called GYVE, for specifying groups of concurrent processes such as operating systems. GYVE was designed by Philip Shaw and is described in his dissertation (New York University, 1978). Important features of GYVE include compile time protection checking, explicit scheduling of processes and a dynamic destroy function. The present work contains a detailed review of most of the constructs of GYVE and discussion of how various features could be modified so as to ease the implementation and/or increase performance in certain situations. One such feature concerns accessing of shared objects. This thesis specifies the syntactic and semantic phases of a GYVE compiler and the runtime structures and procedures required for execution of output from the compiler. Included with the specification is a reconciliation of the definition of GYVE implicit in the implementation with the formal definition of Shaw. Shaw gives his formal definition of the compilation process in the form of a two-level grammar. This is compared with the BNF-based syntactic and semantic phases of the implementation. Shaw's runtime system is specified through procedures written in GYVE. The specification code of the implementation is in a low level form of SETL in which we refer to various system tables of fixed sizes, machines with finite storage, semaphores and a simple timer mechanism. An analysis is given of the use of semaphores as required by the existence of the destroy function and the desire to prevent deadlock.
-
Ph.D. Thesis
1980
Optimization of Inductive Assertions
Warren, Jr., Henry Stanley
Abstract
|
PDF
Title: Optimization of Inductive Assertions
Candidate: Warren, Jr., Henry Stanley
Abstract:
Inductive assertions are assertions placed in the loops of a program, primarily for the purpose of aiding a mechanical correctness prover to prove that the program is correct. Here we assume that the assertions in a program are executed along with the program. That is, the predicate expression of each assertion is evaluated when encountered during program execution, to verify that its value is true. Inductive assertions are particularly expensive in terms of execution time. This is not only because they are in loops, but also because they are frequently themselves loops (quantified expressions). Thus executing them can slow a program's execution by a factor that can be indefinitely large. For example, executing them can change an O(n²) process to an O(n³) process. This thesis investigates the possibility of optimizing such quantified inductive assertions by substantially reducing the range of quantification. It is shown that many inductive assertions encountered in practice fall into a simple pattern in which the quantifier may, essentially, be removed. This restores the execution time of the program to the same order of magnitude that it would have been if the inductive assertions were not executed. Emphasis is placed on methods that are no more costly in compiler size and execution time than conventional global optimization techniques.
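The pattern can be illustrated on a typical loop invariant, my example rather than one from the thesis: a quantified assertion "the prefix is sorted" costs O(n) per iteration, but since it held on the previous iteration it suffices to check only the newly covered element.

```python
# Quantifier reduction for an inductive assertion. full_assertion is
# the quantified form (O(k) per check); reduced_assertion exploits the
# fact that the assertion held on the previous iteration, so only the
# new boundary element needs checking (O(1) per check).

def full_assertion(a, k):
    """Quantified form: a[0..k] is sorted."""
    return all(a[i] <= a[i + 1] for i in range(k))

def reduced_assertion(a, k):
    """Quantifier removed: valid given a[0..k-1] was already sorted."""
    return k == 0 or a[k - 1] <= a[k]

a = [1, 3, 4, 7, 9]
for k in range(len(a)):
    # the cheap check at each iteration certifies the expensive one
    assert reduced_assertion(a, k)
assert full_assertion(a, len(a) - 1)
```

Checking the reduced form at every iteration re-establishes the full quantified assertion inductively, which is exactly why executing the cheap check suffices.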
-
Ph.D. Thesis
2020
On Quadtrees, Voronoi Diagrams, and Lattices: Results in Geometric Algorithms
Bennett, Huxley
Abstract
|
PDF
Title: On Quadtrees, Voronoi Diagrams, and Lattices: Results in Geometric Algorithms
Candidate: Bennett, Huxley
Advisor(s): Chee Yap
Abstract:
We present several results on geometric algorithms, and somewhat more specifically on algorithmic aspects of geometric structures including quadtrees, Voronoi diagrams, and lattices. Our work contains two parts, the first of which is on subdivision algorithms, and the second of which is on lattice algorithms.
Subdivision algorithms amount to recursively splitting an ambient space into smaller pieces until certain conditions hold. Often the underlying space is a square in the plane (or a box in higher dimensions), whose subdivision is represented by a quadtree (or its higher-dimensional analogs). A quadtree is smooth if any two adjacent leaf boxes differ by at most one in depth. We first study the cost of the smooth split operation in quadtrees, showing that it has constant amortized cost in quadtrees of any fixed dimension.
We then present a subdivision-based algorithm for computing isotopic epsilon-approximations of planar minimization diagrams. Given a family of continuous functions, its minimization diagram partitions the plane into regions on which each function is minimal. Minimization diagrams generalize many natural Voronoi diagrams, and we show how to use our framework to compute an anisotropic Voronoi diagram on polygonal sites. We have implemented a prototype of our algorithm for anisotropic Voronoi diagrams, and we provide experimental results.
We then turn to studying lattice algorithms. A lattice is a regular ordering of points in Euclidean space, which is represented as the set of all integer combinations of some linearly independent vectors (which we call a basis of the lattice). In our first work on lattices, we introduce and study the Lattice Distortion Problem (LDP). LDP asks how "similar" two lattices are, i.e., what the minimum distortion of a linear bijection between two lattices is. We show how to compute low-distortion mappings with a tradeoff between approximation quality and running time based on a notion of basis reduction introduced by Seysen (Combinatorica 1993). We also show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions).
Finally, we study the problem of finding lattice bases which are optimal with respect to two basis quality measures. Namely, we study the problem of finding bases with minimal orthogonality defect, and with nearly minimal Seysen condition number. We give algorithms which solve both problems while running in time depending only on the rank of the lattice times a polynomial in the input length.
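The orthogonality defect mentioned here has a compact definition: the product of the basis vectors' lengths divided by the lattice determinant, equal to 1 exactly for orthogonal bases and larger for skewed ones. A minimal two-dimensional sketch of the measure (not of the thesis's algorithms), with illustrative names:

```python
# Orthogonality defect of a 2D lattice basis:
#   defect(b1, b2) = (|b1| * |b2|) / |det(b1, b2)|
# It is 1 iff the basis is orthogonal and grows as the basis skews,
# while the determinant (the lattice volume) stays fixed.

import math

def det2(b1, b2):
    return abs(b1[0] * b2[1] - b1[1] * b2[0])

def orthogonality_defect(b1, b2):
    norm = lambda v: math.hypot(v[0], v[1])
    return norm(b1) * norm(b2) / det2(b1, b2)

# (1,0),(0,1) and (1,0),(3,1) generate the same lattice Z^2
# (same determinant), but the skewed basis has a larger defect.
good_basis = orthogonality_defect((1, 0), (0, 1))
skew_basis = orthogonality_defect((1, 0), (3, 1))
```

Basis reduction in the sense of this chapter searches among bases of the same lattice for one minimizing such a quality measure.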
- Ph.D. Thesis 1979 Automatic Storage Optimization Fabri, Janet Abstract | PDF
- Ph.D. Thesis 1979 The Optimization of Horizontal Microcode within and Beyond Basic Blocks: an Application of Processor Scheduling with Resources Fisher, Joseph Allen Abstract | PDF
- Ph.D. Thesis 1979 On the Complexity of the Satisfiability Problem Goldberg, Allen T. Abstract | PDF
- Ph.D. Thesis 1979 Computing Chromatic Polynomials for Special Families of Graphs Loerinc, Beatrice Margaret Abstract | PDF
- Ph.D. Thesis 1979 Expression Continuity and the Formal Differentiation of Algorithms Paige, Robert Allan Abstract | PDF
- Ph.D. Thesis 1979 Comparison of Direct Code Generation and Intermediate Language Generation for Bootstrapping the Machine-Independent Compiler, Little Schneck, Paul Bennett Abstract | PDF
- Ph.D. Thesis 1979 Groups with Solvable Word Problems Semeniuk, Christine Abstract | PDF
- Ph.D. Thesis 1979 Automatic Discovery of Heuristics for Nondeterministic Programs from Sample Execution Traces Stolfo, Salvatore Joseph Abstract | PDF
- Ph.D. Thesis 1978 Decision Regions for Multi-Stage Allocation Problems Coppage, Samuel Francis, Jr. Abstract | PDF
- Ph.D. Thesis 1978 Configurable Software for Satellite Graphics Hartzman, Peter David Abstract | PDF
- Ph.D. Thesis 1978 Automatic Data Structure Choice in SETL Liu, Ssu-Cheng Abstract | PDF
- Ph.D. Thesis 1978 GYVE, a Programming Language for Protection and Control in a Concurrent Processing Environment Shaw, Philip Sidell Abstract | PDF
-
Ph.D. Thesis
1977
Computer Reconstruction of Bodies Bounded by Quadric Surfaces from a Set of Imperfect Projections
Shapira, Ruth
Abstract
|
PDF
Title: Computer Reconstruction of Bodies Bounded by Quadric Surfaces from a Set of Imperfect Projections
Candidate: Shapira, Ruth
Abstract:
This thesis describes a computer program for constructing a description of solid bodies from a set of n pictures of the bodies. The bodies are assumed to be bounded by faces which are quadric or planar, and they are restricted to have all their vertices formed by exactly three faces. The pictures are taken from different vantage points, with the restriction that a slight shift in vantage point will not alter the topology of the picture. It is assumed that the program receives outline information from a preprocessor which has extracted this information from the pictures. The outline information (a set of line structures) may be imperfect in that some junctions may be erroneously reported and some lines may be missing. However, all lines due to shadows are assumed to have been eliminated by the preprocessor.
The thesis includes a technique for establishing the validity of the junctions presented by the preprocessor, as well as for matching corresponding features in the line structures derived from the different pictures. New grammar rules for line-drawing projections of curved and planar solid bodies are developed. These are useful in parsing the line drawings, and they have also led to the definition of a new family of impossible objects.
The program works simultaneously with all the available line structures. The parsing of every line structure is supported dynamically by the results obtained thus far from the parsing of the other line structures. Through the parsing of the line structures, the use of picture comparison, and the application of the grammar rules, many of the preprocessor errors are detected and partly corrected. The program can also provide feedback to the preprocessor in the form of suggestions as to where to look again for lines in the pictures. The program utilizes the extracted line structures corresponding to the different bodies in all the pictures to determine the set of faces (insofar as possible) for every body.
Every face is defined by an ordered set of n-tuples. The n-tuples are the matched lines and junctions in the n different pictures. The three-dimensional coordinates of the vertices and the equations of the faces can then be determined from these n-tuples. The program was written in PL/I and has been tested on several scenes.
- Ph.D. Thesis 1976 On Algorithms for Minimizing the Number of Multiplications in Matrix Products Laderman, Julian David Abstract | PDF
- Ph.D. Thesis 1976 A Comprehensive Survey of Parsing Algorithms for Programming Languages Owens, Philip Jonathan Abstract | PDF
- Ph.D. Thesis 1976 Programming of Mechanism Motions Spegel, Marjan Abstract | PDF
- Ph.D. Thesis 1976 Inferential Learning through Counterexample Construction Sperling, Michael Zelig Abstract | PDF
- Ph.D. Thesis 1975 Operating System Specification using very High Level Dictions Markstein, Peter Willy Abstract | PDF
- Ph.D. Thesis 1975 Visual Information Processing of Isolated Character Inputs Stryker, Charles William Abstract | PDF
- Ph.D. Thesis 1975 An Investigation into a Probability Model for Correct Target Letter Detection Teichman, Sheldon M. Abstract | PDF
- Ph.D. Thesis 1975 A Computer Based Approach to some Geometric Aspects of Character Recognition Wilamowsky, Yonah Abstract | PDF
- Ph.D. Thesis 1974 Investigations in the Theory of Descriptive Complexity Gewirtz, William Lawrence Abstract | PDF
- Ph.D. Thesis 1974 A Metalanguage for Expressing Grammatical Restrictions in Nodal Spans Parsing of Natural-Language Hobbs, Jerry Robert Abstract | PDF
- Ph.D. Thesis 1974 Computer Edge Extraction from Photographs of Curved Objects Ramer, Eugen Urs Abstract | PDF
- Ph.D. Thesis 1974 Optimum Correction of Pincushion Distortion Takeuchi, Seiichi Abstract | PDF
- Ph.D. Thesis 1974 Type Determination for very High Level Languages Tenenbaum, Aaron Melvin Abstract | PDF
- Ph.D. Thesis 1973 Recursive Compiler-Optimization for Nonserial Program Graphs Agresti, William Wolfgang Abstract | PDF
- Ph.D. Thesis 1973 Studies in Pattern Recognition of Line-Size, Line-Orientation and their Interaction Friedmann, Jehosua Abstract | PDF
- Ph.D. Thesis 1973 Computer Recognition of Handprinted Two-Dimensional Mathematics Grossman, Fred Abstract | PDF
- Ph.D. Thesis 1973 Sub-Elementary Classes of Functions and Relations Harrow, Keith Abstract | PDF
- Ph.D. Thesis 1973 A Study in Programming Techniques Maly, Kurt Abstract | PDF
- Ph.D. Thesis 1973 A Comparison of some Deadlock Models Waxman, Jerry Milton Abstract | PDF
- Ph.D. Thesis 1972 An Experimental Comparison of the Efficiency of Parsing Techniques Knobe, Bruce Stuart Abstract | PDF
- Ph.D. Thesis 1972 Digital Computer Transformations for Irregular Line-Drawings Reggiori, Giovanni B. Abstract | PDF
- Ph.D. Thesis 1971 A Network Queueing Model of a Multiprogrammed Time-Shared Computer System Brown, Theodore David Abstract | PDF
- Ph.D. Thesis 1971 Parallel Programming: Operational Model and Detection of Parallelism Firestone, Roger Morris Abstract | PDF
- Ph.D. Thesis 1971 Global Flow Analysis and Register Allocation for Simple Code Structures Kennedy, Kenneth Wade, Jr. Abstract | PDF
- Ph.D. Thesis 1971 Reconstruction of Polyhedra from Sets of their Perspective Projections Rabinowitz, Andrew David Abstract | PDF
- Ph.D. Thesis 1971 A Trainable Syntactic Model for Syntax Specification and Recognition of Handdrawn Two-Dimensional Patterns Sharma, Onkar P. Abstract | PDF
- Ph.D. Thesis 1971 A Systematic Method for the Creation of Data Structures in Computer Graphics Applications Williams, Robin Abstract | PDF
- Ph.D. Thesis 1971 A Computer Procedure for Generating Visible-Line Drawings of Solids Bounded by Quadric Surfaces Woon, Peter Yi-do Abstract | PDF
- Ph.D. Thesis 1970 The Optimum Two-Dimensional Allocation of Irregular, Multiply-Connected Shapes with Linear, Logical and Geometric Constraints Adamowicz, Michael Abstract | PDF