Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
Ph.D. Thesis
1986
Three-Dimensional Data Acquisition by Means of the Intensity Ratio Depth Sensor (Vision, Robotics)
Carrihill, Brian Lee
Abstract:
The thesis discusses the acquisition of three-dimensional information by means of the Intensity Ratio Depth Sensor. The Intensity Ratio Depth Sensor uses a structured-light triangulation approach for the measurement of depth from a camera unit to object surfaces in a scene. The device may be viewed as a modification of the plane-of-light scheme in which multiple illumination planes are encoded by intensity ratio values obtained from two or three intensity images. The modification avoids the need to scan the plane of light, which, together with the small amount of processing required for the depth calculation, offers a distinct speed advantage over existing schemes. The system design and calibration issues necessary in obtaining a working Intensity Ratio Depth Sensor are analyzed. The depth equation (for the transformation of intensity ratio values into depth values) is presented together with four experimental methods for its calculation. The results of the four sensor implementations are given for test scenes. Potential scene-dependent and scene-independent error sources are discussed. In particular, mutual illumination (illumination resulting from reflections between surface elements) is an important scene-dependent error source. An analysis of mutual illumination based on a radiative energy transfer formulation is presented. The result of the analysis is an iterative mutual illumination removal algorithm, which is applied to test scenes. Two empirical methods for mutual illumination removal are also derived and demonstrated. Preliminary processing of the three-dimensional data produced by the sensor, exploiting constraints imposed by the device, is examined. The processing yields first- and second-derivative surface parameters for points in the scene.
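The key property of the intensity-ratio encoding can be illustrated with a minimal sketch (illustrative only, not the thesis implementation; array values and the helper name are invented): dividing a ramp-illuminated image by a constant-illuminated one cancels surface albedo, leaving a per-pixel code for the illumination plane, which triangulation then converts to depth.

```python
import numpy as np

def intensity_ratio(ramp_img, const_img, eps=1e-9):
    """Per-pixel ratio of the two structured-light images.

    Surface albedo multiplies both images identically, so it cancels in
    the ratio, leaving only the projector's ramp value (the plane code).
    """
    return ramp_img / (const_img + eps)

# Three pixels with different albedos but known ramp codes:
albedo = np.array([0.2, 0.9, 0.5])       # unknown surface reflectance
ramp   = np.array([0.1, 0.5, 0.8])       # encoded plane position in [0, 1]
I_ramp  = albedo * ramp                  # image under the intensity ramp
I_const = albedo * 1.0                   # image under constant illumination
r = intensity_ratio(I_ramp, I_const)     # recovers ramp despite albedo
```

The recovered code identifies which illumination plane struck each pixel; intersecting that plane with the camera ray gives depth, without scanning the plane of light.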
-
Ph.D. Thesis
1986
Polygon Optimization Problems (Computational Geometry, Algorithm)
Chang, Jyun-Sheng
Advisor(s): Yap, Chee
Abstract:
The thesis examines polygon optimization problems arising from the stock-cutting problem. Two types of problems are considered: the inclusion problems and the enclosure problems. The inclusion (enclosure) problems ask for a maximum polygonal subset (minimum polygonal superset) of a given polygon, satisfying certain conditions. Both the area and perimeter metrics on the polygons can be used as the measure of optimality. Various geometric properties and algorithms for these problems are shown. The main results are: (1) An O(n^7) time (O(n^6) time) algorithm for finding a maximum area (perimeter) convex subset. (Only exponential time algorithms existed previously for the problem.) (2) An O(n^2 log n log k) time algorithm for finding a minimum area enclosing convex k-gon. (3) An O(n^2) time algorithm for finding a minimum perimeter enclosing triangle. (4) An O(nk^4) time algorithm for finding a minimum enclosing k-gon with a fixed shape.
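The exponential baseline that result (1) improves on can be sketched as a brute force over vertex subsequences (a toy illustration, not the thesis algorithm: it only considers subsets of the polygon's own vertices and omits the check that the candidate lies inside the original polygon):

```python
import itertools

def area(poly):
    """Shoelace area of a polygon given as a list of (x, y) vertices."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def is_convex(poly):
    """Convex iff all consecutive-edge cross products share one sign."""
    n = len(poly)
    signs = set()
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n], poly[(i + 2) % n]
        c = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if c != 0:
            signs.add(c > 0)
    return len(signs) <= 1

def max_area_convex_vertex_subset(poly):
    """Exponential brute force: try every vertex subsequence of size >= 3."""
    best, best_poly = 0.0, None
    n = len(poly)
    for k in range(3, n + 1):
        for idx in itertools.combinations(range(n), k):
            cand = [poly[i] for i in idx]
            if is_convex(cand):
                a = area(cand)
                if a > best:
                    best, best_poly = a, cand
    return best, best_poly

# A square with a dented top vertex; the best convex vertex subset
# is the four square corners.
best, bp = max_area_convex_vertex_subset([(0, 0), (2, 0), (2, 2), (1, 1), (0, 2)])
```

The O(n^7) dynamic-programming result replaces this enumeration; the sketch only conveys why the naive approach is exponential.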
-
Ph.D. Thesis
1986
Machine Code Optimization
Goss, Clinton Francis
Abstract:
This dissertation explores classes of compiler optimization techniques which are applicable late in the compilation process, after all executable code for a program has been linked. We concentrate on techniques which, for various reasons, cannot be applied earlier in the compilation process. We begin by demonstrating the need for optimizations at this level in the UNIX® programming environment. We then describe a Machine Code Optimizer which improves code in executable task files in that environment. The specific details of certain algorithms are then described: code elimination to remove unreachable code, code distribution to re-order sections of code, operand reduction which converts operands to use more advantageous addressing modes available on the target architecture, and macro compression which collapses common sequences of instructions. We show that the problem of finding optimal solutions for code distribution is NP-Complete and discuss heuristics for practical solutions. We then describe the implementation of a Machine Code Optimizer containing the code elimination, code distribution, and operand reduction algorithms. This optimizer operates in a production environment and incorporates a machine independent architecture representation which allows it to be ported across a large class of machines. We demonstrate the portability of the Machine Code Optimizer to the Motorola MC68000® and the Digital VAX-11® instruction sets. Finally, metrics on the improvements obtained across architectures and the optimization techniques are provided along with proposed lines of further research. The methods demonstrate that substantial reductions in code space and more modest improvements in execution speed can be obtained using these techniques.
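The code-elimination pass can be sketched as a reachability traversal over the instruction list (a toy model, not the dissertation's implementation; the opcode names and tuple encoding are invented for illustration):

```python
# Each instruction is an (opcode, target) pair; targets are instruction
# indices. Anything not reachable from the entry point via fallthrough
# or branches is dead and can be removed.

def eliminate_unreachable(code, entry=0):
    # Build the successor map for the control-flow graph.
    succ = {}
    for i, (op, target) in enumerate(code):
        if op == "jmp":                       # unconditional jump
            succ[i] = [target]
        elif op in ("beq", "bne"):            # conditional: branch or fall through
            succ[i] = [target, i + 1]
        elif op == "ret":                     # no successors
            succ[i] = []
        else:                                 # ordinary instruction
            succ[i] = [i + 1]
    # Depth-first search from the entry point.
    reachable, stack = set(), [entry]
    while stack:
        i = stack.pop()
        if i in reachable or i >= len(code):
            continue
        reachable.add(i)
        stack.extend(succ[i])
    return [ins for i, ins in enumerate(code) if i in reachable]

prog = [("beq", 3), ("add", None), ("ret", None),
        ("mul", None), ("ret", None), ("sub", None)]   # index 5 is dead
live = eliminate_unreachable(prog)
```

A production pass must also rewrite branch targets after instructions are removed, which is where this pass interacts with code distribution.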
-
Ph.D. Thesis
1986
Sequential Quadratic Programming Methods Based on Approximating a Projected Hessian Matrix (Updating Method, Quasi-Newton, Nonlinear Constraints)
Gurwitz, Chaya Bleich
Advisor(s): Overton, Michael
Abstract:
We consider the nonlinear programming problem, namely minimizing a nonlinear function subject to a set of nonlinear equality and inequality constraints. Sequential quadratic programming (SQP) methods are particularly effective for solving problems of this nature. It is assumed that first derivatives of the objective and constraint functions are available, but that second derivatives may be too expensive to compute. Instead, the methods typically update a suitable matrix which approximates second derivative information at each iteration. We are interested in developing SQP methods which maintain an approximation to second derivative information projected onto the tangent space of the constraints. The main motivation for our work is that only the projected matrix enters into the optimality conditions for the nonlinear problem. Updating projected second derivative information reduces the dimension of the matrix to be recurred; we avoid the necessity of introducing an augmenting term which can lead to ill-conditioned matrices; and we are able to make use of standard quasi-Newton updates which maintain hereditary positive definiteness. We discuss four possible formulations of the quadratic programming subproblem and present numerical results which indicate that our methods may be useful in practice.
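The two ingredients named above can be sketched numerically (a minimal illustration with invented data, not the thesis methods): projecting the second-derivative matrix onto the constraint tangent space reduces its dimension, and a standard BFGS update preserves positive definiteness whenever the curvature condition s·y > 0 holds.

```python
import numpy as np

# One linearized constraint in R^3; its null space is the tangent space.
A = np.array([[1.0, 1.0, 1.0]])
_, _, Vt = np.linalg.svd(A)
Z = Vt[1:].T                          # orthonormal basis of the tangent space
H = np.diag([3.0, 2.0, 1.0])          # full-space second-derivative matrix
B = Z.T @ H @ Z                       # 2x2 projected matrix: this is what is updated

def bfgs_update(B, s, y):
    """Standard BFGS update; hereditary positive definiteness needs s @ y > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

s = np.array([1.0, 0.5])              # step, expressed in tangent-space coordinates
y = np.array([2.0, 1.0])              # corresponding gradient difference
B_new = bfgs_update(np.eye(2), s, y)  # stays symmetric positive definite
```

Recurring the 2x2 projected matrix instead of the 3x3 full matrix is the dimension reduction the abstract refers to; with many constraints the saving grows accordingly.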
-
Ph.D. Thesis
1986
Analysis of Cache Memories in Highly Parallel Systems
McAuliffe, Kevin Patrick
Advisor(s): Gottlieb, Allan
Abstract:
Though advances in VLSI technology will soon make it practical to construct parallel processors consisting of thousands of processing elements (PEs) sharing a central memory, the performance of these parallel processors is limited by the high memory access time due to interconnect network latency. This thesis is a study of how the performance of a parallel processor is affected by associating a cache memory with each PE of the system. Cache parameters and policies are varied and the performance of the resulting cache configurations is compared. The cache coherence problem is discussed and a solution that is compatible with the philosophy of parallel systems is adopted. Performance is analyzed by analytic and simulation models. Due to time and space limitations the simulation modeling is done in a hierarchical fashion: a primary level simulates a single cache and a secondary level simulates a parallel machine. The simulators can run in either a trace-driven or a self-driven mode. The trace data used to drive the simulators was collected by tracing the reference patterns of actual parallel programs. An approximate analytic model is developed that predicts the queue waiting times of various components of a parallel system, enabling the comparison of a wider range of cache parameters than is possible with the simulators.
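The trace-driven mode can be sketched in a few lines (a toy single-cache model, far simpler than the thesis simulators; the parameters are illustrative): a direct-mapped cache is fed a list of addresses and reports its hit rate.

```python
def simulate(trace, num_lines=4, line_size=8):
    """Hit rate of a direct-mapped cache driven by an address trace."""
    lines = [None] * num_lines               # tag stored per cache line
    hits = 0
    for addr in trace:
        block = addr // line_size            # which memory block
        index = block % num_lines            # which cache line it maps to
        tag = block // num_lines             # identity of the block in that line
        if lines[index] == tag:
            hits += 1
        else:
            lines[index] = tag               # miss: fetch and fill the line
    return hits / len(trace)

# Two addresses re-referenced (hits), then a conflicting block evicts line 0:
rate = simulate([0, 8, 0, 8, 64, 0])
# rate == 2/6: two hits, four misses (including the conflict miss on the last 0)
```

Varying `num_lines`, `line_size`, and the replacement policy over such traces is exactly the parameter sweep the abstract describes, with the analytic queueing model extending the sweep beyond what simulation time permits.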
-
Ph.D. Thesis
1986
Synthesizing Realistic Textures by the Composition of Perceptually Motivated Functions (Graphics)
Perlin, Kenneth H.
Advisor(s): Lowe, David
Abstract:
This research demonstrates a uniform functional composition framework for modeling and synthesizing complex textures. The appearance of a wide range of natural phenomena can be expressed and efficiently synthesized in this framework. Animation of texture is readily incorporated. Emphasis will be on explaining the properties leading to generality, expressivity, and efficiency. A system is described in which an image is approximated by a finite collection of samples, representing neighborhoods in the image. The user designs visual simulations of surface textures by constructing an algorithm that is to be independently computed at each image sample. Primitive functions are provided that allow control within the texture algorithm of visually important texture properties, such as frequency and first order spatial statistics. The user proceeds by building from these functions. Feedback is provided by images indicating the state of any computed quantity over all samples. The system includes primitive functions allowing the manipulation of such visually discriminable qualities as brightness, contrast, coherent discontinuities, orientation, and features possessing restricted ranges of frequency. These are used to build up composite functions allowing the manipulation of more sophisticated visual qualities. The system is applied to build the appearance of many textures such as water, star fields, flame, smoke, marble, clouds, stucco, rock, and soap films. Major results are twofold. First, it will be shown that a wide range of naturalistic visual textures can be constructed with this approach. Second, a number of particular functions will be demonstrated that encode the common visual elements of disparate visual textures.
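The band-limited primitives and their composition can be sketched in one dimension (a simplified value-noise illustration in the spirit of the framework, not the thesis code; the hash constants are arbitrary): a smooth pseudorandom function of restricted frequency, plus a turbulence-style sum of octaves at doubling frequency and halving amplitude.

```python
import math

def lattice_value(i, seed=0):
    """Deterministic pseudorandom value in [0, 1) for integer lattice point i."""
    h = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def smooth_noise(x):
    """1-D value noise: interpolate lattice values with a smoothstep fade."""
    i = math.floor(x)
    f = x - i
    t = f * f * (3 - 2 * f)                  # smooth fade curve
    return lattice_value(i) * (1 - t) + lattice_value(i + 1) * t

def turbulence(x, octaves=4):
    """Compose octaves: doubling frequency, halving amplitude per octave."""
    return sum(smooth_noise(x * 2**o) / 2**o for o in range(octaves))
```

Marble, clouds, and flame-like textures arise by feeding such a turbulence value into color maps or phase-shifted sine functions, evaluated independently at each image sample.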
-
Ph.D. Thesis
1986
Persistent Data Structures
Sarnak, Neil Ivor
Advisor(s): Tarjan, Robert
Abstract:
This dissertation introduces the concept of persistence in data structures. Classical algorithms operate on data structures in such a manner that modifications to the structure do not preserve its state as it appeared before the modification. A persistent data structure is one in which multiple versions of the structure as it varies through time are maintained. Data structures that do not maintain the history of states of the structure are called ephemeral. A differentiation between two types of persistence, partial persistence and full persistence, is made. A partially persistent data structure allows the modification only of the most recent version of the structure. This makes partial persistence useful in cases where the history of update operations is required for query purposes but no changes of prior versions are desired. Under certain constraints, any ephemeral data structure may be made persistent without a major blow-up of the space and time complexity measures. Full persistence allows modification of any version of the data structure. This dissertation presents algorithms that support persistent search trees, with applications in computational geometry. In particular, the planar point location problem will be solved using persistent binary search trees with an O(log n) query time and O(n) space. Persistent lists are described, with applications in applicative programming languages. In particular, persistent deques are presented that have constant space overhead per deque operation, while still maintaining O(1) update times. Persistent finger search trees are also presented, with applications in text editing. Persistent finger search trees are implemented with an O(log d) space overhead per update, and an O(log d) time bound, where d is the distance between the finger and the affected position. A general result is shown that allows making arbitrary ephemeral data structures partially persistent with an O(1) space overhead per update operation.
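The simplest way to make a search tree persistent is path copying (shown here as a minimal sketch; note this costs O(log n) space per update, whereas the dissertation's general result achieves O(1)): each insertion copies only the nodes on the search path, so every earlier version remains queryable.

```python
class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Persistent insert by path copying: old versions are never mutated."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    return Node(root.key, root.left, insert(root.right, key))

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

v0 = None                # the empty version
v1 = insert(v0, 5)       # version 1 contains {5}
v2 = insert(v1, 3)       # version 2 contains {3, 5}; v1 is unchanged
```

Because unchanged subtrees are shared between versions, a sweep-line algorithm for planar point location can keep every intermediate tree and binary-search among them at query time.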
-
Ph.D. Thesis
1986
The Semantics of Shared Variables in Parallel Programming Languages
Shulman, Norman Victor
Abstract:
Chapter 1 surveys the status of shared variables in parallel programming languages, points out the problems inherent in the use of shared variables, and argues the importance of a semantic definition. Our approach to the semantics of shared variables is set forth, and used to highlight the deficiencies of shared variables in Ada. Chapter 2 presents a clear, simple, informal semantic model of shared variables based on the concepts of atomicity, uniqueness, and independence. The model captures the relationships between these concepts so that it can be used to resolve questions regarding packing, mutual exclusion, and local copies of shared variables. Chapter 3 discusses the deficiencies of shared variables in Ada. An informal semantic model of shared variables in Ada is presented in terms of the concepts of atomicity, uniqueness, and independence. This informal semantic model serves as the basis for proposing changes to the section of the Ada Reference Manual dealing with shared variables, for incorporation in a future revision. Chapter 4 shows how the Ada definition can be modified so that execution of programs such as the on-the-fly garbage collector and the Laplace's equation solver mentioned in Chapter 1 will no longer be classified as erroneous. New restrictions can be imposed to ensure the independence of operations on shared variables. The informal semantic model also serves as the basis for extending the applicability of the axiomatic techniques of Owicki to a wider class of programs subject to certain optimizations of time and space. Chapter 5 shows that it is possible to relax the restrictions on expressions, and to formulate conditions under which it is safe to keep local copies of shared variables and to pack shared structured objects, while preserving the assignment axiom.
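Why local copies of a shared variable need an independence guarantee can be shown with a deterministic interleaving sketch (an illustration of the classic lost-update hazard, not an example from the thesis): two tasks each increment a shared counter through a local copy, and one interleaving loses an update.

```python
def run(schedule):
    """Execute a fixed interleaving of read/write steps on a shared counter."""
    shared = {"x": 0}
    local = {"A": None, "B": None}
    for task, op in schedule:
        if op == "read":
            local[task] = shared["x"]          # take a local copy
        else:
            shared["x"] = local[task] + 1      # write back the incremented copy
    return shared["x"]

serial     = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]
interleave = [("A", "read"), ("B", "read"), ("A", "write"), ("B", "write")]
# run(serial) == 2, but run(interleave) == 1: B's stale local copy
# overwrites A's update, so the two increments are not independent.
```

A semantic model with explicit atomicity and independence conditions determines which of these executions a language definition must forbid, permit, or label erroneous.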
-
Ph.D. Thesis
1986
Recursive Data Types in Setl: Automatic Determination, Data Language Description, and Efficient Implementation (Compilers)
Weiss, Gerald
Abstract:
Very high level languages are often weakly typed in the sense that different occurrences of a name can be associated with distinct types. The types of many entities are nevertheless determinable from the structure of the program, so translators for these languages often incorporate some sort of typefinding algorithm. Due to problems of algorithmic termination, however, these algorithms have been unable to type structures of a recursive nature, such as trees. In this thesis we present a method which detects and uncovers the structure of recursive objects, and discuss possible applications of the method to the optimization of code. We examine the run-time type model of SETL and the corresponding data representation sublanguage (DRSL), and present a general critique of both the design and the implementation of the current data representation sublanguage. The objects expressible by the latter are shown to form a proper subset of the universe of types assumable by SETL entities at run time; we present suggestions for extending the representation sublanguage to allow for complete type specification.
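The termination problem, and its resolution by detecting self-reference, can be sketched with a toy occurs check (invented notation, not the thesis algorithm): a type equation whose right-hand side mentions the variable being defined would make a naive iterative typefinder expand forever, so it is instead folded into an explicit recursive type.

```python
def occurs(var, ty):
    """Does type expression ty mention type variable var?"""
    if ty == var:
        return True
    if isinstance(ty, tuple):                 # constructor application, e.g. ("pair", a, b)
        return any(occurs(var, arg) for arg in ty[1:])
    return False

def classify(var, alternatives):
    """Mark an equation var = alt1 | alt2 | ... as recursive or a plain union."""
    if any(occurs(var, alt) for alt in alternatives):
        return ("rec", var, alternatives)     # naive iteration would not terminate
    return ("union", alternatives)

# TREE = int | ("pair", TREE, TREE): an integer-leaved binary tree.
tree_eq = classify("TREE", ["int", ("pair", "TREE", "TREE")])
flat_eq = classify("T", ["int", "real"])
```

Once the recursion is explicit, the translator can assign the object a fixed representation instead of repeatedly re-deriving ever-deeper approximations of its type.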