Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
Ph.D. Thesis
1989
Combinatorial and algorithmic analysis of space decomposition problems
Aronov, Boris
Abstract | PDF
Title: Combinatorial and algorithmic analysis of space decomposition problems
Candidate: Aronov, Boris
Advisor(s): Sharir, Micha
Abstract:
The first part of the thesis studies geodesic Voronoi diagrams. The closest-site (respectively, furthest-site) Voronoi diagram of a finite set of sites in Euclidean space is a classical geometric structure, which partitions the space into a set of Voronoi cells, each associated with a site, so that any point in the cell of site s is closer to s (respectively, further from s) than to any other site. The structure of such diagrams for point sites in the plane has been completely characterized, and well-known efficient algorithms exist for computing them. Voronoi diagrams have been generalized by replacing the Euclidean distance with a more general metric and/or relaxing the assumption that sites be single points. We consider the closest- and furthest-site Voronoi diagrams for a set of k point sites in a simple n-gon, defined by the internal geodesic distance inside the polygon. We demonstrate that the planar map defined by either diagram comprises O(n + k) features of bounded complexity each, and we describe nearly optimal algorithms for constructing the two diagrams: the closest-site geodesic Voronoi diagram can be computed in time O((n + k)log(n + k)log n), while O((n + k)log(n + k)) time suffices for the furthest-site diagram. The second part of the thesis analyzes the structure of an arrangement of flat triangles in 3-space. The combined combinatorial complexity of all non-convex cells (i.e., non-convex components of the complement of the union of the triangles), maximized over all arrangements of n triangles, is shown to be roughly $O(n^{7/3})$, improving the best previously known upper bound of $O(n^{3-1/49})$ for a smaller quantity: the maximum combinatorial complexity of a single cell. Our result has applications to algorithmic motion planning, stemming from the well-known technique that transforms a polyhedral body translating in a polyhedral environment into a collection of convex polygonal plates in three-dimensional space; the set of placements of the body reachable from a starting configuration along a collision-free path corresponds to a cell in the arrangement of these plates. Thus analyzing the maximum combinatorial complexity of a single cell and obtaining a comparably efficient algorithm for its computation constitutes a satisfactory solution to the translational motion planning problem just mentioned. To this end, we also consider the problem of computing a single cell or a subset of cells in a three-dimensional arrangement of triangles, providing a nearly worst-case optimal randomized algorithm for the former problem and a less efficient procedure for the latter. In addition, we examine a few special classes of arrangements for which better estimates on the maximum single-cell complexity can be deduced and for which computing a cell or a collection of cells appears easier.
-
Ph.D. Thesis
1989
Data communication in robot control systems
Clark, Dayton R., Jr.
Abstract | PDF
Title: Data communication in robot control systems
Candidate: Clark, Dayton R., Jr.
Advisor(s): Mishra, Bud
Abstract:
Robots and robot controllers are becoming more sophisticated, and consequently the demands on the controller's operating system are increasing. The lower levels of robot control systems (indeed, most real-time control systems) are characterized by servo loops. This thesis examines servo loops and how they affect data communications within robot control systems. In the two systems described here, the special characteristics of servo loops are exploited to enhance data communication. HIC is an operating system for hierarchies of servo loops. It uses rate-monotonic scheduling for the periodic servo-loop processes. HIC events (or processes), which are used to implement servo loops, are not allowed to block; they surrender the processor only upon completion or when preempted by a higher-priority process. A non-blocking communication structure, Periodic Data Buffers (PDBs), was developed for inter-process communication. HIC has been implemented and is used successfully in a controller for the Utah/MIT hand. GANGLIA is a proposed real-time communication network. It is intended to allow the processors in a robot controller to be distributed within the robot, so that processors can be close to the sensors and actuators they control. Much of the traffic on such a network would be periodic. GANGLIA uses a central controller which allocates access to the network. For the periodic traffic a fixed schedule, produced off-line, is used; for the aperiodic traffic round-robin polling is used. Unlike most protocols, messages do not contain the address of the destination node. Instead, each message is labeled with the name of its contents; each node examines each message and decides whether or not it is interested in it. A special communication controller in each node (the Communication Memory Management Unit) examines and selects the messages. The result of this protocol is a network-wide common memory. In this thesis, the GANGLIA protocol is described in detail and some preliminary analysis of its effectiveness in real robot systems is given.
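The non-blocking buffer idea lends itself to a compact illustration. Below is a minimal Python sketch of one plausible realization using a version counter, so the servo-loop writer never blocks and a reader retries until it sees a consistent snapshot; this is an assumed design for illustration, not the thesis's actual PDB layout.

```python
class PeriodicDataBuffer:
    """Sketch of a non-blocking, single-writer data buffer in the spirit
    of the PDBs described above (hypothetical layout).  The servo-loop
    writer never blocks or waits; readers retry until they observe the
    same even version number before and after copying."""

    def __init__(self):
        self._version = 0          # even: stable, odd: write in progress
        self._data = None

    def write(self, sample):       # called by the high-priority servo loop
        self._version += 1         # becomes odd: readers will retry
        self._data = sample
        self._version += 1         # becomes even: snapshot is consistent

    def read(self):                # called by lower-priority processes
        while True:
            v = self._version
            snapshot = self._data
            if v % 2 == 0 and self._version == v:
                return snapshot    # no write overlapped the copy

# Usage: the servo loop calls pdb.write((t, joint_angles)) each period;
# consumers call pdb.read() whenever they need the latest sample.
```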
-
Ph.D. Thesis
1989
On-line motion planning
Cox, James L.
Abstract | PDF
Title: On-line motion planning
Candidate: Cox, James L.
Advisor(s): Yap, Chee
Abstract:
This thesis investigates online, or exploratory, motion planning. We develop algorithms for planning the motion of a planar rod (ladder) and of a three-link planar arm moving amidst obstacles bounded by simple, closed polygons. The exact shape, number, and location of the obstacles are assumed unknown to the planning algorithm, which can obtain information about the obstacles only by detecting points of contact with them. The ability to detect contact is formalized by move primitives that we call guarded moves. We call ours the online motion planning problem, as opposed to the usual offline version; this is a significant departure from the usual setting for motion planning problems, in which the algorithm is given an explicit description of the scene as part of its input. We demonstrate that the retraction method can be applied, although new issues arise that have no counterparts in the usual setting. For the rod we obtain an algorithm with path complexity $O(m) = O(n^2)$ guarded moves, where $n$ is the number of obstacle walls and $m$ is the number of pairs of obstacle walls and corners at distance less than or equal to the length of the ladder; this matches the known lower bound (Ork85), which holds for both the online and the offline (explicitly given environment) versions of the problem. The computational complexity of the algorithm, $O(m \log n)$, matches the best known algorithm (SfS) for the offline version. For the arm we obtain an algorithm with path complexity $O(m) = O(n^3)$, where $n$ is the number of obstacle walls and $m$ is the number of pairs of obstacle features that the linkage can simultaneously contact. The computational complexity is $O(n^3 \log n)$. Moreover, our constraint-based approach extends to $k > 3$ link arms, yielding algorithms that are polynomial for each fixed $k$: the complexity is proportional to $n^k$.
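The guarded-move primitive is easy to picture. The sketch below is a hypothetical Python rendering under strong simplifications (a point robot advancing in small steps, with a collision oracle standing in for contact sensing); the thesis works with a rod and a multi-link arm, not a point.

```python
def guarded_move(position, direction, step, in_collision, max_steps=10_000):
    """Advance until contact is detected: the only way an online planner
    like the one above can learn about the obstacles.  Returns the last
    free position and whether the move was stopped by contact."""
    x, y = position
    dx, dy = direction
    for _ in range(max_steps):
        nx, ny = x + dx * step, y + dy * step
        if in_collision(nx, ny):
            return (x, y), True            # stopped by contact
        x, y = nx, ny
    return (x, y), False                   # ran to the step budget, no contact

# Head east from the origin toward a disk obstacle centered at (2, 0):
pos, hit = guarded_move((0.0, 0.0), (1.0, 0.0), 0.01,
                        lambda x, y: (x - 2) ** 2 + y ** 2 < 1)
print(pos, hit)                            # stops just before x = 1, hit=True
```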
-
Ph.D. Thesis
1989
Quantitative analysis of problems in computer algebra: Gröbner bases and the Nullstellensatz
Dube, Thomas William
Abstract | PDF
Title: Quantitative analysis of problems in computer algebra: Gröbner bases and the Nullstellensatz
Candidate: Dube, Thomas William
Advisor(s): Yap, Chee
Abstract:
This thesis presents new quantitative results concerning multivariate polynomial ideals. Since these ideals are the basic objects of (computational) algebraic geometry, these results have important ramifications for algebraic algorithms, particularly the solving of simultaneous equations. Furthermore, all the new theorems are proven using only constructive techniques and basic algebra. In many cases, the proofs provide algorithms for constructing the objects which the theorems describe. Among the results assembled here, three are of particular importance. The first shows that every ideal and residue class ring can be decomposed into simple pieces called cones. Next, the cone decomposition is used to produce a new upper bound on the degree of polynomials which appear in a reduced Gröbner basis. Finally, a new tight upper bound for the exponent in Hilbert's Nullstellensatz is demonstrated.
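For readers unfamiliar with the objects involved, a reduced Gröbner basis can be computed with an off-the-shelf system. The snippet below uses SymPy's groebner function purely to illustrate what the thesis's degree bounds apply to; the example ideal is made up.

```python
from sympy import groebner, symbols

x, y = symbols('x y')
# A toy zero-dimensional ideal; the reduced Groebner basis under the
# lex order is the object whose polynomial degrees the thesis bounds.
G = groebner([x**2 + y**2 - 1, x*y - 2], x, y, order='lex')
print(G.exprs)
```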
-
Ph.D. Thesis
1989
SMARTS: Shared-memory Multiprocessor Ada Run Time Supervisor
Flynn-Hummel, Susan Frances
Abstract | PDF
Title: SMARTS: Shared-memory Multiprocessor Ada Run Time Supervisor
Candidate: Flynn-Hummel, Susan Frances
Advisor(s): Schonberg, Edmond
Abstract:
The programming language Ada is primarily intended for the construction of large-scale and real-time systems. Although the tasking model of Ada was aimed mainly at embedded systems, its rich set of synchronization operators, together with its support for programming in the large, makes Ada increasingly attractive for writing inherently parallel, computationally intensive, numeric and symbolic applications. Highly parallel shared-memory MIMD machines such as the NYU Ultracomputer have traditionally been regarded as suitable for large-scale scientific code, and not for more symbolic or heterogeneous concurrent applications such as are found in artificial intelligence or real-time programming. However, these applications would benefit greatly from (and even require) the computational power provided by highly parallel machines, so it is desirable to develop Ada implementations for such machines. The concern has been that the cost of managing large numbers of Ada tasks would negate the speedup obtained from their parallel execution; indeed, a run-time supervisor for Ada must contend with many potentially expensive serialization points, that is to say, constructs that may take time proportional to the number of tasks involved. In this thesis we show that a run-time supervisor for an implementation of Ada on highly parallel machines can be written free of costly serialization points. The run-time supervisor SMARTS (Shared-memory Multiprocessor Ada Run Time Supervisor) depends on the hardware synchronization primitive $fetch\&\Phi$ and supports the tasking features of Ada in a highly parallel manner. We further reduce the overhead of Ada tasking by means of micro-tasking, i.e., the explicit scheduling of a family of Ada tasks on a specified number of processors; thus, Ada tasks are implemented as lightweight processes managed by SMARTS, rather than full-blown operating system processes. Finally, SMARTS implements Ada shared variables efficiently by means of relay sets. Relay sets not only provide a means for identifying and resolving references to shared variables, but also facilitate the implementation of the Ada rendezvous mechanism as a remote procedure call.
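Micro-tasking on top of fetch&Φ can be sketched compactly. The Python below simulates fetch&add with a lock (on the Ultracomputer it is a combining hardware primitive, so no serialization occurs) and uses it for self-scheduled dispatch of loop iterations; the names are illustrative, not SMARTS's API.

```python
import threading

class FetchAndAdd:
    """Simulated fetch&add; a lock stands in for the Ultracomputer's
    combining network, which performs the operation without serializing."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def fetch_add(self, delta=1):
        with self._lock:
            old = self._value
            self._value += delta
            return old                     # returns the pre-increment value

def microtask(work, n_iters, n_workers=4):
    """Self-scheduling loop: each worker claims the next iteration index
    with fetch&add, so there is no central dispatcher to serialize on."""
    counter = FetchAndAdd()
    def worker():
        while True:
            i = counter.fetch_add()
            if i >= n_iters:
                return
            work(i)
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

microtask(lambda i: print('iteration', i), n_iters=8)
```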
-
Ph.D. Thesis
1989
A computational treatment of the comparative
Friedman, Carol
Abstract | PDF
Title: A computational treatment of the comparative
Candidate: Friedman, Carol
Advisor(s): Grishman, Ralph
Abstract:
This thesis develops a computational treatment of the comparative in English that is general, efficient, and relatively easy to implement, while not unduly complicating the natural language processing system. The treatment was implemented in the Proteus Question Answering System, which translates natural language questions into database queries. The comparative is a particularly difficult language structure to process, and at present only a few natural language systems handle it, in limited ways. Yet the comparative is an essential component of language that frequently occurs in discourse. It is difficult to process because it corresponds to a remarkably diverse range of syntactic forms, such as coordinate and subordinate conjunctions and relative clauses, which are themselves complex and often contain missing elements. Semantically, the comparative is cross-categorical: adjectives, quantifiers, and adverbs can all carry the comparative feature, so the semantics of the comparative must be consistent with that of different linguistic categories while retaining its own unique characteristics. The computational approach of this thesis is based on a language model which contains functionally independent syntactic, semantic, and pragmatic components. Although the comparative relates to all the components, the syntactic component is the one mainly affected. The syntactic stage of processing analyzes and regularizes the comparative structures. The analysis process uses existing mechanisms that handle structures similar to the comparative. The regularization process transforms all the different comparative structures into one standard form consisting of a comparative operator and two complete clauses. This process has two phases: the first uses a compositional approach based on Montague-style translation rules; the second uses specialized procedures to complete the regularization by expanding the comparative, filling in missing elements, and providing the appropriate quantified terms associated with the compared elements. After the comparative is regularized, the remaining stages of processing are hardly affected: each clause of the comparative is processed by the usual procedures, and only minor modifications are required specifically for the comparative.
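As a toy illustration of the regularization step only (far simpler than the Proteus implementation, and with a made-up clause representation), the sketch below maps a simple comparative sentence onto a comparison operator and two complete clauses, filling in the elided element of the than-clause.

```python
import re

def regularize_comparative(sentence):
    """Toy regularization: rewrite 'X is ADJ-er than Y' as a comparison
    operator over two complete clauses.  The than-clause's predicate is
    elided in the surface form and is filled in here."""
    m = re.match(r"(\w+) is (\w+)er than (\w+)", sentence)
    if m is None:
        return None
    subj1, adj, subj2 = m.groups()
    return ('>', (subj1, adj), (subj2, adj))   # 'Y is ADJ' was elided

print(regularize_comparative("John is taller than Mary"))
# ('>', ('John', 'tall'), ('Mary', 'tall'))
```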
-
Ph.D. Thesis
1989
Verification of three-dimensional model parameters from two-dimensional image data
Goldberg, Robert Raphael
Abstract | PDF
Title: Verification of three-dimensional model parameters from two-dimensional image data
Candidate: Goldberg, Robert Raphael
Advisor(s): Lowe, David
Abstract:
A unified approach is presented for instantiating model and camera parameters in the verification stage of visual recognition. Recognition implies the generation of a hypothesis, a map between projected model data and image data; an important remaining part of the problem is the instantiation of model and camera parameters to verify the hypothesis. We pose this camera pose determination as a non-linear least-squares problem, with objective functions minimizing the distance between the projected model and the image data. The approach treats camera and model parameters identically, which simplifies the camera/sensor calibration problem. Coordinate trees with null components, an original data structure, model the objects in the image and allow the calculation of analytical partial derivatives with respect to the parameters of model and camera. We discuss objective model functions that best suit general applications. The incorporation of various numeric techniques is analyzed, with tables displaying convergence results for various models and parameters. Good convergence results are obtained, and the method can be integrated into general vision applications. No depth information is required, and the algorithms hold in noisy images as well, adding considerable robustness to our techniques. A natural extension of these techniques is to instantiate the parameters of generally constrained models.
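The least-squares formulation is standard enough to sketch. Below is a generic Gauss-Newton loop in Python/NumPy in which camera and model parameters are stacked into a single vector, mirroring the unified treatment described above; the residuals and jacobian callbacks are assumptions supplied by the caller, not the thesis's objective functions.

```python
import numpy as np

def gauss_newton(residuals, jacobian, params, iters=20, tol=1e-10):
    """Generic Gauss-Newton for nonlinear least squares.  residuals(p)
    returns the vector of model-to-image distances; jacobian(p) returns
    its matrix of partial derivatives (analytical, per the thesis).
    Camera and model parameters sit together in one vector p."""
    p = np.asarray(params, dtype=float)
    for _ in range(iters):
        r = residuals(p)
        J = jacobian(p)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # solve J @ step = -r
        p = p + step
        if np.linalg.norm(step) < tol:
            break
    return p
```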
-
Ph.D. Thesis
1989
Topics in algebraic computing: Subresultants, GCD, factoring and primary ideal decomposition
Ho, Chung-Jen
Abstract | PDF
Title: Topics in algebraic computing: Subresultants, GCD, factoring and primary ideal decomposition
Candidate: Ho, Chung-Jen
Advisor(s): Yap, Chee
Abstract:
Our goal is to present an algorithm for computing a primary decomposition of a zero-dimensional ideal: we compute the decomposition of the radical of the zero-dimensional ideal and lift it to a primary decomposition. The algorithm for decomposing radicals simply uses Kronecker's method of elimination together with GCD and factoring algorithms. Kronecker's method of elimination and GCD computations are related to resultant systems and subresultants; thus, we first investigate the theory of subresultants. We expound the theory of subresultants along the lines suggested by Loos. However, there were some major oversights in Loos's proof of the Subresultant Theorem; we point out exactly where Loos's proof fails and give a corrected version of the proofs. Then we define the Sylvester matrix of many polynomials and explore its properties. Using these properties, we derive fast parallel algorithms for computing the GCD of many polynomials. Our algorithms have a better processor bound than von zur Gathen's algorithm, and one of them uses no divisions. The factoring algorithm deals with factoring polynomials over multiple algebraic extensions of the rational number field. We present an algorithm to find an integer $D$ such that the defect of an integral basis for a multiple extension of Q divides $D$. Though there is a naive algorithm that finds such a $D$ by translating a multiple extension into a simple extension, our algorithm has much better time and space bounds than the naive one. With this result, we can factor polynomials directly, without translating a multiple extension into a simple extension. Finally, we improve Kronecker's method of elimination; then, by applying the GCD and factoring algorithms to the resultant systems generated by Kronecker's method, we obtain a tree representation of all the associated prime ideals belonging to the zero-dimensional ideal.
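As a point of reference for the GCD results, here is a sequential Python sketch of the GCD of many univariate polynomials over Q, folding the Euclidean algorithm pairwise. The thesis's contribution is fast parallel (and division-free) versions of this computation, which the sketch does not attempt.

```python
from fractions import Fraction
from functools import reduce

def pmod(a, b):
    """Remainder of polynomial division.  Polynomials are coefficient
    lists, highest degree first, in exact Fraction arithmetic."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        if a[0] == 0:
            a.pop(0)
            continue
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)                   # leading coefficient is now zero
    while a and a[0] == 0:
        a.pop(0)
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return [c / a[0] for c in a]   # normalize to a monic polynomial

def gcd_many(polys):
    """GCD of many polynomials by a pairwise fold (the specification
    that the parallel algorithms above compute much faster)."""
    return reduce(pgcd, polys)

# gcd of x^2 - 1, x^2 + 2x + 1, and x + 1 is x + 1:
print(gcd_many([[1, 0, -1], [1, 2, 1], [1, 1]]))
# [Fraction(1, 1), Fraction(1, 1)], i.e. x + 1
```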
-
Ph.D. Thesis
1989
Object recognition by geometric hashing
Lamdan, Yehezkel
Abstract | PDF
Title: Object recognition by geometric hashing
Candidate: Lamdan, Yehezkel
Advisor(s): Schwartz, Jacob T.; Wolfson, Haim J.
Abstract:
This thesis proposes a general and efficient model-based object recognition scheme. The scheme addresses the problem of identifying instances of model objects in single images. The model objects are two- or three-dimensional, and their instances in the scene may be overlapping and partially occluded by other, unknown objects. The camera viewpoint is unknown and assumed to be arbitrary. The images can be two-dimensional intensity images or three-dimensional range images. The scheme deals uniformly with all feasible imaging transformations, from the simplest case of pure translation to the most complex case of the perspective transformation. The proposed method is based on geometric hashing. It hypothesizes model-to-scene transformations based on corresponding model and scene feature subsets. These subsets have the minimal cardinality that still allows recovery of the imaging transformation for a given transformation type. To prune the search space of all model and scene feature subset pairs, a hashing scheme is used, based on geometric relations among the object features which are invariant under the given transformation type. The recognition algorithm has two major steps. First, a hash table encoding the geometric invariants of the model features is prepared; this stage is independent of the scenes to be processed later and can be executed off-line. In the second stage, an efficient matching algorithm is performed which utilizes the previously prepared hash table. The efficiency of the recognition is achieved by considering only those model and scene subsets which are 'similar' under the given transformation type. The algorithm was tested in 'real-life' situations for the important cases of recognizing flat and solid objects in the 3D world, using the weak perspective approximation to the perspective transformation.
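For the simplest transformation class mentioned, pure translation, geometric hashing reduces to a few lines. The Python sketch below builds the offline table of translation-invariant coordinates and votes in the online stage; the coarse quantization, point features, and single scene basis point are simplifications of this sketch, not features of the thesis.

```python
from collections import defaultdict

def build_table(models, quant=0.5):
    """Offline stage: under pure translation, the coordinates of every
    model point relative to a chosen basis point are invariant, so each
    (model, basis) pair is recorded under those quantized keys."""
    table = defaultdict(list)
    for name, pts in models.items():
        for b in pts:                      # every point may serve as basis
            for p in pts:
                if p == b:
                    continue
                key = (round((p[0] - b[0]) / quant),
                       round((p[1] - b[1]) / quant))
                table[key].append((name, b))
    return table

def recognize(table, scene, quant=0.5, min_votes=3):
    """Online stage: pick a scene basis point and let every other scene
    point vote for the (model, basis) pairs stored under its key."""
    votes = defaultdict(int)
    sb = scene[0]                          # candidate scene basis point
    for p in scene[1:]:
        key = (round((p[0] - sb[0]) / quant),
               round((p[1] - sb[1]) / quant))
        for entry in table.get(key, ()):
            votes[entry] += 1
    return [e for e, v in votes.items() if v >= min_votes]

models = {'M': [(0, 0), (1, 0), (0, 1), (2, 2)]}
table = build_table(models)
# The scene is the model translated by (5, 5):
print(recognize(table, [(5, 5), (6, 5), (5, 6), (7, 7)]))  # [('M', (0, 0))]
```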
-
Ph.D. Thesis
1989
Mapping algorithms on regular parallel architectures
Lee, PeiZong
Abstract | PDF
Title: Mapping algorithms on regular parallel architectures
Candidate: Lee, PeiZong
Advisor(s): Kedem, Zvi
Abstract:
It is significant that many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. This dissertation studies the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution. The architectural model considered in depth is that of an arbitrary-dimensional systolic array. The mathematical structure of an algorithm is characterized by classifying its data-dependence vectors according to the newly introduced ZERO-ONE-INFINITE property. Using this classification, the first complete set of necessary and sufficient conditions is derived for the correct transformation of a nested loop algorithm onto a given systolic array of arbitrary dimension by means of linear mappings. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays that efficiently execute classes of algorithms. In addition, a computer-aided design system running on SUN workstations has been implemented to assist in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.
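The flavor of such correctness conditions can be conveyed with a small check: a linear schedule must advance time along every dependence vector, and the combined space-time map must be injective. The NumPy sketch below encodes this simplified, classical version; the thesis's full necessary-and-sufficient conditions are finer-grained than this.

```python
import numpy as np

def valid_linear_mapping(schedule, allocation, deps):
    """Simplified legality check for mapping an n-deep nested loop onto
    an (n-1)-dimensional systolic array by linear maps: causality along
    every dependence, plus injectivity of the full space-time map."""
    S = np.asarray(schedule)               # length-n time schedule
    A = np.asarray(allocation)             # (n-1) x n processor allocation
    for d in deps:
        if S @ np.asarray(d) <= 0:         # each dependence must take time
            return False
    T = np.vstack([S, A])                  # combined space-time transform
    return np.linalg.matrix_rank(T) == T.shape[1]

# Matrix multiplication: the dependence vectors are the unit vectors.
deps = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(valid_linear_mapping((1, 1, 1), [(1, 0, 0), (0, 1, 0)], deps))  # True
```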
-
Ph.D. Thesis
1989
Transformations for backtracking SETL programs
Nathan, Albert
Abstract | PDF
Title: Transformations for backtracking SETL programs
Candidate: Nathan, Albert
Advisor(s): Dewar, Robert
Abstract:
We study program transformations for a class of combinatorial search problems whose solutions are usually found by backtrack searching. High-level algorithms for such problems can be elegantly specified using SETL's backtracking primitives ok and fail, for which we give a more formal and precise semantic definition than the one that currently exists. We then explore two types of transformations applicable to such specifications. First, we derive Finite Differencing transformations which reduce the amount of computation performed at each node of the search tree. Though the formal derivation of these transformations is somewhat lengthy, the net results are simple and easily understood. In the process of deriving the transformations, we also expose some difficulties encountered when applying Finite Differencing methods to programs which use ok/fail. Second, we propose two general transformations which reduce the size of the search tree by pruning subtrees that are guaranteed to fail. The first is based on the idea of using knowledge accumulated during the search to guide the search, while the second prunes subtrees which contain no paths of sufficient length to extend the current partial solution to a complete solution. For each filter, we describe its enabling conditions, give a high-level specification, and then formally derive an efficient implementation using Finite Differencing. Finally, we suggest suitable representations, based on SETL's Data Representation Sublanguage, for implementing the data structures used in our transformations. We demonstrate the effectiveness of all these transformations by programming some familiar backtrack-search problems and comparing the running times and numbers of nodes generated by the transformed versions against those of the original specification. We also point to papers in the literature in which some suggestion of these transformations appears, but in which, in contrast to this work, no formal demonstration of their correctness or applicability to other problem domains is given.
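The roles of ok/fail and of pruning filters can be sketched generically (in Python, not SETL, so this is only an analogy): iterating over choices plays the role of ok, falling through a failed viability test plays the role of fail, and is_viable is where pruning transformations like the two above would plug in.

```python
def backtrack(partial, choices, is_complete, is_viable):
    """Generic backtrack search.  Strengthening is_viable prunes
    subtrees guaranteed to fail, shrinking the search tree."""
    if is_complete(partial):
        yield tuple(partial)
        return
    for c in choices(partial):
        partial.append(c)
        if is_viable(partial):
            yield from backtrack(partial, choices, is_complete, is_viable)
        partial.pop()

# Example: 4-queens, one column choice per row.
n = 4
cols = lambda p: range(n)
complete = lambda p: len(p) == n
viable = lambda p: all(p[-1] != q and abs(p[-1] - q) != len(p) - 1 - i
                       for i, q in enumerate(p[:-1]))
print(next(backtrack([], cols, complete, viable)))   # (1, 3, 0, 2)
```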
-
Ph.D. Thesis
1989
Optimization and garbage collection in Ada programs on shared memory computers
Operowsky, Howard Lawrence
Abstract | PDF
Title: Optimization and garbage collection in Ada programs on shared memory computers
Candidate: Operowsky, Howard Lawrence
Advisor(s): Schonberg, Edmond
Abstract:
Compiler development for Ada is still in its infancy. Despite the language's goal of supporting embedded systems efficiently, Ada programs still tend to be large and slow. In this thesis, we investigate three issues related to the efficient implementation of Ada programs: run-time representation of types and objects, reduction of run-time constraint checking, and parallel garbage collection on a shared-memory multiprocessor. We present a collection of type templates for scalar and composite types which are storage-efficient and allow efficient object code to be produced by the code generator, together with an algorithm for constructing these templates at run-time when constraint information is unavailable at compile-time. We show that a global optimizer is not required to reduce the overhead of constraint checking in Ada programs: we present a series of data-flow equations for available expressions and use them as the basis for a simple algorithm that eliminates redundant constraint checks. The algorithm is syntax-directed and executes in a single pass over the source program's abstract syntax tree; no control-flow analysis is required. It also performs constant propagation, using an extended framework, and induction-variable analysis, which is simplified because the algorithm operates on the abstract syntax tree. Although programs with goto statements are not considered, the exit statement is handled fully, and we also examine the effects of shared variables and exception handling. No commercial compiler for Ada currently performs garbage collection. We examine the difficulties Ada presents for garbage collection and give practical algorithms for Ada on shared-memory multiprocessors. We extend Kung and Song's on-the-fly garbage collection algorithm to support multiple tasks on the NYU Ultracomputer/IBM RP3 computers, and we prove that no additional synchronization is required, thanks to Ada's rules on the use of shared variables.
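On straight-line code the idea behind redundant-check elimination is simple: a constraint check can be dropped when an identical check is still "available", i.e., it has already passed and the checked variable has not been reassigned. The Python toy below works on a flat statement list rather than an abstract syntax tree, and it ignores the control flow, exits, shared variables, and exceptions that the thesis handles.

```python
def eliminate_checks(stmts):
    """Single-pass redundant range-check elimination on straight-line
    code.  Statements: ('check', var, lo, hi) or ('assign', var, expr)."""
    available = set()                      # checks known to have passed
    out = []
    for stmt in stmts:
        if stmt[0] == 'check':
            if stmt in available:
                continue                   # identical check still available
            available.add(stmt)
            out.append(stmt)
        else:                              # assignment kills checks on var
            _, var, _ = stmt
            available = {c for c in available if c[1] != var}
            out.append(stmt)
    return out

prog = [('check', 'i', 1, 10), ('assign', 'j', 'i+1'),
        ('check', 'i', 1, 10),             # redundant: i is unchanged
        ('assign', 'i', 'j'),
        ('check', 'i', 1, 10)]             # needed again after the kill
print(eliminate_checks(prog))              # the middle check is gone
```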
-
Ph.D. Thesis
1989
Using relational discrete event systems and models for prediction of future behavior of databases
Tuzhilin, Alexander Sergei
Abstract | PDF
Title: Using relational discrete event systems and models for prediction of future behavior of databases
Candidate: Tuzhilin, Alexander Sergei
Advisor(s): Kedem, Zvi
Abstract:
The following prediction problem is studied in this dissertation: given a specification of the future behavior of a system and the current state of the system described by a relational database, predict what will happen to the system in the future. Behavior is defined in terms of Relational Discrete Event Systems (RDESes) and Models (RDEMs). An RDES is a set of possible non-deterministic trajectories of future states of a system; an RDEM is a finite formal description of a generally infinite RDES. Various production-system RDEMs and a recurrence-equation RDEM are defined and formally compared in terms of expressive power. It is shown that one of the production-system RDEMs is superior to the other RDEMs considered, not only in expressive power but in other respects as well. The suitability of various control strategies for restricting non-determinism and improving the system's performance is also considered. To obtain predictions about possible future states of a database, a Predictive Query Language (PQL) is defined, with syntax based on a predicate temporal logic and semantics based on RDEM models; it is shown how PQL relates to relational queries for Datalog and its extensions. Finally, a prototype of the Cassandra system is described. Cassandra supports PQL with semantics based on a production-system RDEM. An example of a small flexible manufacturing system is used throughout the dissertation to illustrate the methods described.
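A toy rendering of the idea, under assumptions of this sketch alone (dictionary states, rules as plain functions, a fixed horizon): a production-system RDEM maps each state to zero or more successors, and the RDES is the set of branching trajectories this generates. PQL and Cassandra's semantics are far richer than this.

```python
def trajectories(state, rules, horizon):
    """Enumerate possible futures of a state under a toy
    production-system model: branching encodes non-determinism."""
    if horizon == 0:
        yield (state,)
        return
    successors = [s for rule in rules for s in rule(state)]
    if not successors:                     # no rule fires: state persists
        yield (state,)
        return
    for nxt in successors:
        for tail in trajectories(nxt, rules, horizon - 1):
            yield (state,) + tail

# Toy rule: a machine either finishes one more part or idles this step.
rule = lambda s: [dict(s, done=s['done'] + 1), s] if s['done'] < 2 else []
for t in trajectories({'done': 0}, [rule], 2):
    print(t)
```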
-
Ph.D. Thesis
1989
Fuzzy disk modeling and rendering of textured complex three-dimensional surfaces of real objects
Yang, Xue Dong
Abstract | PDF
Title: Fuzzy disk modeling and rendering of textured complex three-dimensional surfaces of real objects
Candidate: Yang, Xue Dong
Advisor(s): Perlin, Ken; Schwartz, Jacob T.
Abstract:
Three-dimensional geometric modeling in computer graphics is concerned with the representation, specification, and manipulation of free-form curves, surfaces, and volumes. This research explores a model for constructing representations of complex three-dimensional surfaces of real-world objects, such as sculptures in a museum, from sample points acquired with a special 3-D camera, and for synthesizing computer-generated pictures from this model. The difficulty of the problem comes from the complexity of the surface characteristics of such objects, which involve complicated irregular shapes and rich textures. This thesis presents a new three-dimensional surface model for computer graphics display, the three-dimensional fuzzy disk model, which allows any curved surface to be approximated by a number of overlapping disks. A new blending method has been developed to generate smoothly curved surfaces from the overlapping disks; the shape of a blending surface can be controlled by varying geometric parameters. The three-dimensional fuzzy disk representation is organized into a multi-resolution structure which allows adaptive refinement of surface details and supports a coarse-to-fine display process. A scan-line rendering algorithm has been developed to synthesize images from the new model, and we also present a simpler, less accurate, but more efficient approximation to the original model. In addition, we present a fast shadow penumbra approximation algorithm capable of generating soft shadows.
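A minimal sketch of the blending idea, under strong assumptions of this sketch alone (horizontal disks, a height-field surface, and a polynomial weight that fades to zero at each disk's rim): the surface height at a point is the weight-blended average of the heights of the disks covering it. The thesis's actual blend and its geometric control parameters differ.

```python
def fuzzy_disk_height(x, y, disks, falloff=2.0):
    """Blend overlapping disks into a smooth height field.
    Each disk is (cx, cy, cz, r): center (cx, cy), height cz, radius r."""
    num = den = 0.0
    for cx, cy, cz, r in disks:
        d2 = ((x - cx) ** 2 + (y - cy) ** 2) / (r * r)
        if d2 >= 1.0:
            continue                       # point lies outside this disk
        w = (1.0 - d2) ** falloff          # smooth weight, zero at the rim
        num += w * cz
        den += w
    return num / den if den else None      # None: no disk covers (x, y)

# Two overlapping disks at different heights blend smoothly between them.
disks = [(0.0, 0.0, 1.0, 2.0), (1.5, 0.0, 2.0, 2.0)]
print(fuzzy_disk_height(0.75, 0.0, disks))
```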
-
Ph.D. Thesis
1989
The editing distance between trees: Algorithms and applications
Zhang, KaiZhong
Abstract | PDF
Title: The editing distance between trees: Algorithms and applications
Candidate: Zhang, KaiZhong
Advisor(s): Shasha, Dennis
Abstract:
Trees are a ubiquitous building block in computer science and related fields; examples include grammar parses, image descriptions, and the secondary structures of RNA molecules. Comparing trees is therefore useful for comparing scenes, parses, and so on. This thesis presents algorithms for tree comparison and applications of those algorithms. We take the distance between two labeled trees to be the weighted number of editing operations (insert, delete, and modify) needed to transform one tree into the other. We show that for unordered trees computing this distance is an NP-complete problem. For ordered trees we present a simple, fast dynamic programming algorithm that is significantly better than the best previously published algorithms. We then show that our method provides a general technique for solving other related tree problems (e.g., approximate tree matching). We also present efficient parallel algorithms under the assumption of unit costs. One of our applications is the comparison of secondary structures of RNA molecules; we describe another application, in vision, that uses tree comparison to compare shapes. We have also implemented some of the algorithms in the form of a tree comparison toolkit, a preliminary version of which has been used at the U.S. National Cancer Institute for the comparison of RNA secondary structures.
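The edit-distance recurrence for ordered labeled trees can be stated directly as a memoized recursion on forests (unit costs). The Python sketch below is the specification-level recurrence, not the thesis's faster dynamic program, and the tuple representation is an assumption of this sketch.

```python
from functools import lru_cache

def tree(label, *children):
    """Ordered labeled tree as (label, (child, child, ...))."""
    return (label, tuple(children))

@lru_cache(maxsize=None)
def forest_dist(f1, f2):
    """Edit distance between two ordered forests, unit costs."""
    if not f1 and not f2:
        return 0
    if not f1:
        _, kids = f2[-1]
        return forest_dist((), f2[:-1] + kids) + 1      # insert that node
    if not f2:
        _, kids = f1[-1]
        return forest_dist(f1[:-1] + kids, ()) + 1      # delete that node
    (l1, k1), (l2, k2) = f1[-1], f2[-1]
    return min(
        forest_dist(f1[:-1] + k1, f2) + 1,    # delete root of rightmost tree
        forest_dist(f1, f2[:-1] + k2) + 1,    # insert root of rightmost tree
        forest_dist(f1[:-1], f2[:-1])         # match the rightmost trees:
        + forest_dist(k1, k2)                 #   compare their child forests
        + (1 if l1 != l2 else 0),             #   relabel if labels differ
    )

def tree_dist(t1, t2):
    return forest_dist((t1,), (t2,))

a = tree('f', tree('a'), tree('b'))
b = tree('f', tree('a'), tree('c'), tree('b'))
print(tree_dist(a, b))   # 1: insert the 'c' leaf
```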