Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
M.S. Thesis
2020
Title: Cooperation and Deception in multi-agent signaling
Candidate: Enaganti, Inavamsi
Advisor(s): Bhubaneswar Mishra
Abstract:
We aim to study cooperation and deception in a system with multiple agents through utility and signaling. We start with the classic standard for cooperation, namely the ‘Prisoner’s Dilemma’, and then move on to the ‘Iterated Prisoner’s Dilemma’, which we treat as an iterated version of a signaling game, since an agent’s previous actions are a signal to the opponent about the agent’s type. We then turn to bio-mimicry and deception, where we study the dynamics and interesting phenomena that arise from signaling between predator and prey. Cooperation and deception are two sides of the same coin, and it is imperative to understand both of them as we develop better and more efficient artificial intelligence systems.
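As an illustration of how past actions serve as a signal of an agent's type, here is a minimal Python sketch of the iterated Prisoner's Dilemma. The payoff matrix and the two strategies are standard textbook choices, not taken from the thesis.

```python
# Minimal sketch of the iterated Prisoner's Dilemma as a signaling game:
# each agent's move history is the only "signal" its opponent can use to
# infer its type. Payoffs are the conventional (T=5, R=3, P=1, S=0) values.

PAYOFFS = {  # (my move, opponent's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(my_history, opp_history):
    return "D"

def tit_for_tat(my_history, opp_history):
    # Read the opponent's last action as a signal of its type:
    # cooperate first, then mirror.
    return opp_history[-1] if opp_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Two tit-for-tat players cooperate throughout, while an unconditional defector exploits tit-for-tat exactly once before the signal is read.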
M.S. Thesis
2020
Title: Static Responsibility Analysis of Floating-Point Programs
Candidate: Saatcioglu, Goktug
Advisor(s): Thomas Wies
Abstract:
The last decade has seen considerable progress in the analysis of floating-point programs. There now exist frameworks to verify both the total amount of round-off error a program accrues and the robustness of floating-point programs. However, there is a lack of static analysis frameworks for identifying the causes of erroneous behaviors that arise from the use of floating-point arithmetic. Such errors are both sporadic and triggered by specific inputs or by numbers computed by programs. In this work, we introduce a new static analysis by abstract interpretation to define and detect responsible entities for such behaviors in finite-precision implementations. Our focus is on identifying causes of test discontinuity, where small differences in inputs may lead to large differences in the control flow of programs, causing the computed finite-precision path to differ from the same ideal computation carried out in real numbers. However, the analysis is not limited to discontinuity, as any type of error cause can be identified by the framework. We propose to carry out the analysis by a combination of an over-approximating forward partitioning semantics and an under-approximating backward semantics of programs, which leads to a forward-backward static analysis with iterated intermediate reduction. This paves the way for the design of a tool that helps programmers identify and fix numerical bugs in their programs due to the use of finite-precision numbers. The implementation of this tool is the next step for this work.
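The notion of test discontinuity can be illustrated with a toy Python program (our own example, not taken from the thesis): the same branch resolves differently under IEEE-754 doubles and under exact rational arithmetic, so the two executions take different control-flow paths.

```python
# An illustrative example of "test discontinuity": the same program takes
# different control-flow paths under IEEE-754 doubles and under the ideal
# computation over the reals (modeled here with exact rationals).

from fractions import Fraction

def classify(x, y, threshold):
    # The branch condition is the "test"; which side we land on
    # determines the path the program follows.
    if x + y <= threshold:
        return "low path"
    return "high path"

# Finite precision: 0.1 + 0.2 rounds to 0.30000000000000004 > 0.3.
float_path = classify(0.1, 0.2, 0.3)

# Ideal computation: 1/10 + 2/10 == 3/10 exactly, so the test succeeds.
real_path = classify(Fraction(1, 10), Fraction(2, 10), Fraction(3, 10))
```

Here `float_path` and `real_path` differ, so a responsibility analysis of the kind described above would flag the branch condition and the entities feeding it as responsible for the divergence.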
M.S. Thesis
2020
Title: Pointer-Generator Transformers for Morphological Inflection
Candidate: Singer, Assaf
Advisor(s): Kyunghyun Cho
Abstract:
In morphologically rich languages, a word's surface form reflects syntactic and semantic properties such as gender, tense or number. For example, most English nouns have both singular and plural forms (e.g., robot/robots, process/processes), which are known as the inflected forms of the noun. The vocabularies of morphologically rich languages, e.g., German or Spanish, are larger than those of morphologically poor languages, e.g., Chinese, if every surface form is considered an independent token. This motivates the development of models that can deal with inflections by either analyzing or generating them and, thus, alleviate the sparsity problem.
This thesis presents approaches to generating morphological inflections. We cast morphological inflection as a sequence-to-sequence problem and apply different versions of the transformer, a state-of-the-art deep learning model, to the task. However, for many languages, the availability of morphological lexicons, and thus of training data for the task, is a big challenge. In our work, we explore different ways to overcome this: (1) we propose a pointer-generator transformer model to allow easy copying of input characters, which is known to improve the performance of neural models in the low-resource setting; (2) we implement a system for the task of unsupervised morphological paradigm completion, where systems produce inflections from raw text alone, without relying on morphological information; and (3) we explore multitask training and data hallucination pretraining, two methods which yield more training examples. With our formulated models and data augmentation methods, we participate in the SIGMORPHON 2020 shared task and describe the NYU-CUBoulder systems for Task 0 on typologically diverse morphological inflection and Task 2 on unsupervised morphological paradigm completion. Finally, we design a low-resource experiment to show the effectiveness of our proposed approaches for low-resource languages.
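The sequence-to-sequence casting can be shown concretely. The sketch below is our own illustration (the helper function and tag strings are hypothetical, loosely following the UniMorph-style tags used in SIGMORPHON shared tasks), and it also shows why a copy mechanism helps: most target characters already appear in the source.

```python
# Casting morphological inflection as sequence-to-sequence: the source is
# the lemma's characters plus morphological feature tags, the target is
# the inflected form's characters. Illustrative sketch only.

def make_seq2seq_pair(lemma, tags, inflected):
    source = list(lemma) + tags   # characters followed by feature tags
    target = list(inflected)      # characters of the inflected form
    return source, target

# English plural example from the abstract: process -> processes.
src, tgt = make_seq2seq_pair("process", ["N", "PL"], "processes")

# Most target characters occur in the source, which is why a
# pointer-generator's copy mechanism helps in low-resource settings.
copied = sum(ch in src for ch in tgt)
```

For this pair, every one of the nine target characters can be copied from the source; only the suffix position has to be generated.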
M.S. Thesis
2020
Title: Data Flow Refinement Type Inference Tool Drift²
Candidate: Su, Yusen
Advisor(s): Thomas Wies
Abstract:
Refinement types use logical predicates to capture run-time properties of programs, which can then be used for program verification. Traditionally, SMT-based refinement type checking tools, such as the implementation of Liquid Types [1], rely on heuristics or on random sampling of logical qualifiers to find the relevant logical predicates.
In this thesis, we describe the implementation of a novel algorithm proposed in Zvonimir Pavlinovic’s PhD thesis "Leveraging Program Analysis for Type Inference" [2], based on the framework of abstract interpretation, for inferring refinement types in functional programs. The analysis generalizes Liquid type inference and is parametric in the abstract domain used to express type refinements. The main contribution of this thesis is to instantiate this parametric type analysis and to evaluate the algorithm’s precision and efficiency. Moreover, we describe a tool, called DRIFT², which allows users to select an abstract domain for expressing type refinements and to control the degree to which context-sensitive information is tracked by the analysis.
Finally, our work compares the precision and efficiency of DRIFT² for different configurations of numerical abstract domains and widening operations [3]. In addition, we compare DRIFT² with existing refinement type inference tools. The experimental results show that our method is both effective and efficient in automatically inferring refinement types.
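The idea behind qualifier-based refinement type inference can be sketched in a few lines. This is a deliberately naive toy illustration of the general Liquid Types idea, not DRIFT²'s actual algorithm: real Liquid-style inference discharges the candidate predicates with an SMT solver rather than by testing on sample inputs.

```python
# Toy sketch of qualifier-based refinement inference: propose candidate
# predicates q(v, x) for the refinement {v : int | q(v, x)} on a
# function's return value v, and keep those that hold on all samples.

def abs_val(x):
    return x if x >= 0 else -x

# Candidate logical qualifiers (the "random sampling" ingredient).
candidates = {
    "v >= 0": lambda v, x: v >= 0,
    "v == x": lambda v, x: v == x,
    "v >= x": lambda v, x: v >= x,
}

samples = range(-5, 6)
inferred = [
    name for name, q in candidates.items()
    if all(q(abs_val(x), x) for x in samples)
]
# "v >= 0" and "v >= x" survive; "v == x" is refuted by negative inputs.
```

A sound analysis such as DRIFT² replaces the sampling step with abstract interpretation over a chosen numerical abstract domain, so the surviving refinements are guaranteed for all inputs, not just the tested ones.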
M.S. Thesis
2020
Title: Are the proposed similarity metrics also a measure of functional similarity?
Candidate: Yellapragada, Manikanta Srikar
Advisor(s): Kyunghyun Cho
Abstract:
A recent body of work attempts to understand the behavior and training dynamics of neural networks by analyzing intermediate representations and designing metrics to define the similarity between those representations. We observe that the representations of the last layer can be thought of as the functional output of the model up to that point. In this work, we investigate whether the similarity between these representations can be considered a stand-in for the similarity of the networks' output functions. This can have an impact on many downstream tasks, but we specifically analyze it in the context of transfer learning. Consequently, we perform a series of experiments to understand the relationship between the representational similarity and the functional similarity of neural networks. We show in two ways that the leading metric for representational similarity, CKA, does not bear a strict relationship with functional similarity.
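The metric in question, linear CKA (centered kernel alignment with a linear kernel), can be sketched in a few lines of NumPy. This is a generic implementation of the standard formula, not code from the thesis; `X` and `Y` hold the two networks' representations of the same `n` examples row-wise, and the layer widths may differ.

```python
# Linear CKA between two representation matrices X (n x d1) and Y (n x d2).

import numpy as np

def linear_cka(X, Y):
    # Center each feature dimension.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)
```

By construction the score is 1 for a representation compared with itself and is invariant to isotropic scaling and orthogonal transformations, which is precisely why a high CKA score need not pin down the network's output function.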