Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
- TR1992-610 1992 HESFCN - A Fortran package of Hessian Subroutines for Testing Nonlinear Optimization Software Averbukh, V.; Figueroa, S.; Schlick, T. Abstract | PDF
Title: HESFCN - A Fortran package of Hessian Subroutines for Testing Nonlinear Optimization Software
Author(s): Averbukh, V.; Figueroa, S.; Schlick, T.
Abstract:
We report the development of Hessian FORTRAN routines for testing unconstrained nonlinear optimization. An existing package, "Algorithm 566" (J. Moré, B. G. Garbow, and K. Hillstrom, ACM Trans. Math. Softw. 7, 14-41 and 136-140, 1981), provides function and gradient subroutines for 18 test functions for multivariate minimization. Our supplementary Hessian segments will enable users to test optimization software that requires second-derivative information. Eigenvalue analysis throughout the minimization will also be possible, with the goal of better understanding minimization progress under different algorithms and the relation of progress to eigenvalue distribution and condition number.
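As an illustration of what such a routine computes, here is a minimal Python sketch (the package itself is Fortran) of a function/gradient/Hessian triple for the two-dimensional Rosenbrock function, one of the classic problems in the Moré-Garbow-Hillstrom test set; names and the sample point are illustrative only:

```python
import numpy as np

def rosenbrock(x):
    # f(x) = 100 (x2 - x1^2)^2 + (1 - x1)^2
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

def rosenbrock_hess(x):
    # Analytic second derivatives: the information a Hessian segment adds.
    return np.array([
        [1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
        [-400.0 * x[0], 200.0],
    ])

# Eigenvalue analysis along a minimization path, as the abstract suggests:
w = np.linalg.eigvalsh(rosenbrock_hess(np.array([-1.2, 1.0])))
print("eigenvalues:", w, "condition number:", w.max() / w.min())
```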
- Ph.D. Thesis 1992 A Miniature Space-Variant Active Vision System: Cortex-I Bederson, Benjamin Abstract | PDF
Title: A Miniature Space-Variant Active Vision System: Cortex-I
Candidate: Bederson, Benjamin
Advisor(s): Schwartz, Eric
Abstract:
We have developed a prototype miniaturized active vision system whose sensor architecture is based on a logarithmically structured space-variant pixel geometry. A space-variant image's resolution changes across the image. Typically, the central part of the image has a very high resolution, and the resolution falls off gradually in the periphery. Our system integrates a miniature CCD-based camera, pan-tilt actuator, controller, general purpose processors and display. Due to the ability of space-variant sensors to cover large work-spaces, yet provide high acuity with an extremely small number of pixels, space-variant active vision system architectures provide the potential for radical reductions in system size and cost. We have realized this by creating an entire system that takes up less than a third of a cubic foot. In this thesis, we describe a prototype space-variant active vision system (Cortex-I) which performs such tasks as tracking moving objects and license plate reading, and functions as a video telephone.
We report on the design and construction of the camera (which is 8x8x8mm), its readout, and a fast mapping algorithm to convert the uniform image to a space-variant image. We introduce a new miniature pan-tilt actuator, the Spherical Pointing Motor (SPM), which is 4x5x6cm. The basic idea behind the SPM is to orient a permanent magnet to the magnetic field induced by three orthogonal coils by applying the appropriate ratio of currents to the coils. Finally, we present results of integrating the system with several applications. Potential application domains for systems of this type include vision systems for mobile robots and robot manipulators, traffic monitoring systems, security and surveillance, telerobotics, and consumer video communications.
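A minimal sketch of the current-ratio idea behind the SPM described above, assuming an idealized coil triple whose net field is proportional to the vector of coil currents; the current scale i_max is a made-up illustrative value:

```python
import numpy as np

def spm_currents(pan, tilt, i_max=0.5):
    """Currents for three orthogonal coils so that their summed field
    points along the desired (pan, tilt) direction.

    For an ideal triple the field is B ~ k * (ix, iy, iz), so the magnet
    aligns with the current vector itself; only the ratio of the three
    currents matters.
    """
    aim = np.array([
        np.cos(tilt) * np.cos(pan),   # x component of the unit aim vector
        np.cos(tilt) * np.sin(pan),   # y component
        np.sin(tilt),                 # z component
    ])
    return i_max * aim                # currents applied in the ratio of aim

print(spm_currents(np.radians(30.0), np.radians(10.0)))
```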
The long-range goal of this project is to demonstrate that major new applications of robotics will become feasible when small low-cost machine vision systems can be mass-produced. This notion of "commodity robotics" is expected to parallel the impact of the personal computer, in the sense of opening up new application niches for what has until now been expensive and therefore limited technology.
- TR1992-602 1992 An $O(m \log n)$-Time Algorithm for the Maximal Planar Subgraph Problem Cai, J.; Han, X.; Tarjan, R. Abstract | PDF
Title: An $O(m \log n)$-Time Algorithm for the Maximal Planar Subgraph Problem
Author(s): Cai, J.; Han, X.; Tarjan, R.
Abstract:
Based on a new version of Hopcroft and Tarjan's planarity testing algorithm, we develop an $O(m \log n)$-time algorithm to find a maximal planar subgraph.
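The abstract does not detail the algorithm; for contrast, the naive baseline it improves on (one planarity test per edge, so roughly a factor of n slower than the paper's $O(m \log n)$ bound) can be sketched in Python with the networkx library:

```python
import networkx as nx

def maximal_planar_subgraph_naive(graph):
    """Naive baseline: keep each edge that preserves planarity.
    Costs one planarity test per edge, versus the paper's O(m log n) total."""
    H = nx.Graph()
    H.add_nodes_from(graph.nodes)
    for u, v in graph.edges:
        H.add_edge(u, v)
        is_planar, _ = nx.check_planarity(H)
        if not is_planar:
            H.remove_edge(u, v)   # this edge would break planarity; skip it
    return H

K5 = nx.complete_graph(5)          # K5 is non-planar
H = maximal_planar_subgraph_naive(K5)
print(H.number_of_edges())         # 9: K5 minus any one edge is maximal planar
```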
- TR1992-604 1992 More Efficient Bottom-Up Multi-Pattern Matching in Trees Cai, J.; Paige, R.; Tarjan, R. Abstract | PDF
Title: More Efficient Bottom-Up Multi-Pattern Matching in Trees
Author(s): Cai, J.; Paige, R.; Tarjan, R.
Abstract:
Pattern matching in trees is fundamental to a variety of programming language systems. However, progress has been slow in satisfying a pressing need for general purpose pattern matching algorithms that are efficient in both time and space. We offer asymptotic improvements in both time and space to Chase's bottom-up algorithm for pattern preprocessing. A preliminary implementation of our algorithm runs ten times faster than Chase's implementation on the hardest problem instances. Our preprocessing algorithm has the advantage of being on-line with respect to pattern additions and deletions. It also adapts to favorable input instances, and on Hoffmann and O'Donnell's class of Simple Patterns, it performs better than their special purpose algorithm tailored to this class. We show how to modify our algorithm using a new decomposition method to obtain a space/time tradeoff. Finally, we trade a log factor in time for a linear space bottom-up pattern matching algorithm that handles a wide subclass of Hoffmann and O'Donnell's Simple Patterns.
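A minimal Python sketch of bottom-up tree matching in the Hoffmann-O'Donnell style the paper builds on: each subject node is labeled with the set of pattern subtrees matching there. Real implementations precompute these match sets into table-driven codes, which is the preprocessing the paper speeds up; this sketch uses explicit sets instead:

```python
WILD = "_"   # wildcard pattern; trees are (symbol, child, ...) tuples or leaves

def subpatterns(pat, acc):
    """Collect every subtree of a pattern (the 'pattern forest')."""
    acc.add(pat)
    if isinstance(pat, tuple):
        for child in pat[1:]:
            subpatterns(child, acc)
    return acc

def matches_at(node, match_sets, pat):
    if pat == WILD:
        return True
    node_sym, node_kids = (node[0], node[1:]) if isinstance(node, tuple) else (node, ())
    pat_sym, pat_kids = (pat[0], pat[1:]) if isinstance(pat, tuple) else (pat, ())
    return (node_sym == pat_sym and len(node_kids) == len(pat_kids)
            and all(p == WILD or p in match_sets[c]
                    for c, p in zip(node_kids, pat_kids)))

def bottom_up_match(subject, patterns):
    """Label every subject subtree with the set of pattern subtrees
    matching there, in one bottom-up pass."""
    forest = set()
    for p in patterns:
        subpatterns(p, forest)
    match_sets = {}
    def visit(node):
        if node in match_sets:
            return
        if isinstance(node, tuple):
            for child in node[1:]:
                visit(child)
        match_sets[node] = {p for p in forest if matches_at(node, match_sets, p)}
    visit(subject)
    return match_sets

subject = ("f", ("g", "a"), "a")
pats = [("f", WILD, "a"), ("g", "a")]
ms = bottom_up_match(subject, pats)
print([p for p in pats if p in ms[subject]])   # [('f', '_', 'a')] matches at the root
```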
- TR1992-609 1992 Multiset Discrimination - A Method for Implementing Programming Language Systems without Hashing Cai, J.; Paige, R. Abstract | PDF
- TR1992-603 1992 Counting Embeddings of Planar Graphs Using DFS Trees Cai, Jiazhen Abstract | PDF
Title: Counting Embeddings of Planar Graphs Using DFS Trees
Author(s): Cai, Jiazhen
Abstract:
Previous algorithms for counting embeddings of planar graphs used P-Q trees and were restricted to biconnected graphs. Although the P-Q tree approach is conceptually simple, its implementation is complicated. In this paper we solve this problem using DFS trees, which are easy to implement. We also give formulas that count the number of embeddings of general planar graphs (not necessarily connected or biconnected) in $O(n)$ arithmetic steps, where n is the number of vertices of the input graph. Finally, our algorithm can be extended to generate all embeddings of a planar graph in linear time with respect to the output.
- TR1992-595 1992 Multiplicative Schwarz Algorithms for Some Nonsymmetric and Indefinite Problems Cai, X.-C.; Widlund, O. Abstract | PDF
Title: Multiplicative Schwarz Algorithms for Some Nonsymmetric and Indefinite Problems
Author(s): Cai, X.-C.; Widlund, O.
Abstract:
The classical Schwarz alternating method has recently been generalized in several directions. This effort has resulted in a number of new powerful domain decomposition methods for elliptic problems, in new insight into multigrid methods and in the development of a very useful framework for the analysis of a variety of iterative methods. Most of this work has focused on positive definite, symmetric problems. In this paper a general framework is developed for multiplicative Schwarz algorithms for nonsymmetric and indefinite problems. Several applications are then discussed including two- and multi-level Schwarz methods and iterative substructuring algorithms. Some new results on additive Schwarz methods are also presented.
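A minimal numpy sketch of the classical multiplicative (alternating) Schwarz iteration, on the symmetric positive definite model problem that these generalizations start from (not the paper's nonsymmetric/indefinite setting), with two overlapping subdomains of a 1D Poisson discretization; sizes and overlap are illustrative:

```python
import numpy as np

# Multiplicative Schwarz for -u'' = f on (0, 1), u(0) = u(1) = 0,
# discretized with n interior points and two overlapping subdomains.
n, h = 99, 1.0 / 100
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)

dom1 = np.arange(0, 60)        # indices of subdomain 1 (overlap: 40..59)
dom2 = np.arange(40, n)        # indices of subdomain 2

u = np.zeros(n)
for sweep in range(50):
    for dom in (dom1, dom2):   # solve each local problem in turn
        r = f - A @ u          # current global residual
        u[dom] += np.linalg.solve(A[np.ix_(dom, dom)], r[dom])

print(np.linalg.norm(f - A @ u))   # residual norm after 50 sweeps
```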
- Ph.D. Thesis 1992 Regular Expressions to DFA's using Compressed NFA's Chang, Chia-Hsiang Abstract | PDF
Title: Regular Expressions to DFA's using Compressed NFA's
Candidate: Chang, Chia-Hsiang
Advisor(s): Paige, Robert
Abstract:
We show how to turn a regular expression R of length r into an $O(s)$ space representation of McNaughton and Yamada's NFA, where s is the number of occurrences of alphabet symbols in R, and $s+1$ is the number of NFA states. The standard adjacency list representation of McNaughton and Yamada's NFA takes up $1 + 2s + s^2$ space in the worst case. The adjacency list representation of the NFA produced by Thompson takes up between $2r$ and $6r$ space, where r can be arbitrarily larger than s. Given any subset V of states in McNaughton and Yamada's NFA, our representation can be used to compute the set U of states one transition away from the states in V in optimal time $O(|V| + |U|)$. McNaughton and Yamada's NFA requires $\Theta(|V| \times |U|)$ time in the worst case. Using Thompson's NFA, the equivalent calculation requires $\Theta(r)$ time in the worst case.
An implementation of our NFA representation confirms that it takes up an order of magnitude less space than McNaughton and Yamada's machine. An implementation to produce a DFA from our NFA representation by subset construction shows linear and quadratic speedups over subset construction starting from both Thompson's and McNaughton and Yamada's NFA's. It also shows that the DFA produced from our NFA is as much as one order of magnitude smaller than DFA's constructed from the two other NFA's.
A UNIX egrep compatible tool called cgrep, based on our NFA representation, has been implemented. A benchmark shows that cgrep is dramatically faster than both UNIX egrep and GNU e?grep.
Throughout this thesis the importance of syntax is stressed in the design of our algorithms. In particular, we exploit a method of program improvement in which costly repeated calculations can be avoided by establishing and maintaining program invariants. This method of symbolic finite differencing has been used previously by Douglas Smith to derive efficient functional programs.
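For readers unfamiliar with McNaughton and Yamada's construction, here is a minimal Python sketch of the underlying position automaton (one state per alphabet-symbol occurrence, hence the $s+1$ states), built from the standard nullable/first/last/follow sets. The thesis's contribution, not shown here, is the compressed $O(s)$ representation of this machine's transitions:

```python
# Regular expressions as nested tuples:
#   ("sym", a), ("cat", r, s), ("alt", r, s), ("star", r)
# One NFA state per symbol occurrence ("position"), s+1 states in all.

def annotate(r, pos=None):
    """Number the symbol occurrences left to right; return (tree, symbols)."""
    if pos is None:
        pos = []
    if r[0] == "sym":
        pos.append(r[1])
        return ("sym", r[1], len(pos) - 1), pos
    if r[0] == "star":
        body, pos = annotate(r[1], pos)
        return ("star", body), pos
    left, pos = annotate(r[1], pos)
    right, pos = annotate(r[2], pos)
    return (r[0], left, right), pos

def analyze(r):
    """Return (nullable, first, last, follow) for an annotated regex."""
    if r[0] == "sym":
        return False, {r[2]}, {r[2]}, {}
    if r[0] == "star":
        _, fst, lst, fol = analyze(r[1])
        for p in lst:                        # loop back: last -> first
            fol.setdefault(p, set()).update(fst)
        return True, fst, lst, fol
    n1, f1, l1, fol = analyze(r[1])
    n2, f2, l2, fo2 = analyze(r[2])
    for p, s in fo2.items():
        fol.setdefault(p, set()).update(s)
    if r[0] == "alt":
        return n1 or n2, f1 | f2, l1 | l2, fol
    for p in l1:                             # cat: last(left) -> first(right)
        fol.setdefault(p, set()).update(f2)
    return n1 and n2, f1 | (f2 if n1 else set()), l2 | (l1 if n2 else set()), fol

def accepts(regex, word):
    r, syms = annotate(regex)
    nullable, first, last, follow = analyze(r)
    if not word:
        return nullable
    states = {p for p in first if syms[p] == word[0]}
    for a in word[1:]:
        states = {q for p in states for q in follow.get(p, ())
                  if syms[q] == a}
    return bool(states & last)

# (a|b)* a b : strings over {a, b} ending in "ab"
R = ("cat", ("cat", ("star", ("alt", ("sym", "a"), ("sym", "b"))),
             ("sym", "a")), ("sym", "b"))
print(accepts(R, "aab"), accepts(R, "aba"))  # True False
```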
- TR1992-620 1992 Backward Analysis for Higher-Order Functions Using Inverse Images Chuang, T.-R.; Goldberg, B. Abstract | PDF
Title: Backward Analysis for Higher-Order Functions Using Inverse Images
Author(s): Chuang, T.-R.; Goldberg, B.
Abstract:
We propose a method for performing backward analysis on higher-order functional programming languages based on computing inverse images of functions over abstract domains. This method can be viewed as abstract interpretation done backward. Given an abstract semantics which supports forward analysis, we can transform it into an abstract semantics which performs backward analysis. We show that if the original abstract semantics is correct and computable, then the transformed version of the abstract semantics is also correct and computable.
More specifically, given a forward abstract semantics of a higher-order functional language which is expressed in terms of Scott-closed powerdomains, we derive a backward abstract semantics which is expressed in terms of Scott-open powerdomains. The derivation is shown to be correct and the relationship between forward analysis and backward analysis is established. We apply this method to classic strictness analysis in functional languages and obtain promising results. We show that the time complexity of inverse-image-based backward analysis is not much worse than that of the forward analysis from which it is derived. We then compare this work with previous work of Wadler and Hughes (1987), Hughes (1988), and Burn (1990), showing that some special restrictions and constructions in those works have a natural interpretation in the Scott-closed/Scott-open powerdomain framework. A brief outline of applying the inverse image method to other backward semantic analyses is also given.
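A toy illustration of the inverse-image idea, assuming the simplest possible setting rather than the paper's powerdomain machinery: on the two-point strictness domain for one argument, a function is strict exactly when the "divergent" abstract input lies in the inverse image of the "divergent" result set:

```python
# Backward analysis by inverse image over a tiny finite abstract domain.
# Two-point strictness domain: 0 = definitely divergent, 1 = unknown.

def inverse_image(f, domain, target):
    """All abstract inputs whose image under f lies in `target`."""
    return {x for x in domain if f(x) in target}

domain = {0, 1}

# Forward abstractions of two one-argument functions:
f_id = lambda x: x        # \x -> x   (uses its argument)
f_const = lambda x: 1     # \x -> 42  (ignores its argument)

# Backward question: which inputs force a divergent result?
# A function is strict iff 0 is in the inverse image of {0}.
print(0 in inverse_image(f_id, domain, {0}))     # True: strict
print(0 in inverse_image(f_const, domain, {0}))  # False: not strict
```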
- TR1992-606 1992 Domain Decomposition Algorithms with Small Overlap Dryja, M.; Widlund, O. Abstract | PDF
Title: Domain Decomposition Algorithms with Small Overlap
Author(s): Dryja, M.; Widlund, O.
Abstract:
Numerical experiments have shown that two-level Schwarz methods often perform very well even if the overlap between neighboring subregions is quite small. This is true to an even greater extent for a related algorithm, due to Barry Smith, where a Schwarz algorithm is applied to the reduced linear system of equations that remains after the variables interior to the subregions have been eliminated. In this paper, a supporting theory is developed.
- TR1992-615 1992 Some Recent Results on Schwarz Type Domain Decomposition Algorithms Dryja, M.; Widlund, O. Abstract | PDF
Title: Some Recent Results on Schwarz Type Domain Decomposition Algorithms
Author(s): Dryja, M.; Widlund, O.
Abstract:
Numerical experiments have shown that two-level Schwarz methods, for the solution of discrete elliptic problems, often perform very well even if the overlap between neighboring subregions is quite small. This is true to an even greater extent for a related algorithm, due to Barry Smith, where a Schwarz algorithm is applied to the reduced linear system of equations that remains after the variables interior to the subregions have been eliminated. A supporting theory is outlined.
- Ph.D. Thesis 1992 Complexity Issues in Computational Algebra Gallo, Giovanni Abstract | PDF
Title: Complexity Issues in Computational Algebra
Candidate: Gallo, Giovanni
Advisor(s): Mishra, Bud
Abstract:
The ideal membership problem for the ring of multivariate polynomials is a central problem in Computational Algebra. Relatively tight computational complexity bounds for this problem are known in the case of polynomials with coefficients in a field. After reviewing these results we give an algorithm, together with an upper bound on its complexity, for the solution of the membership problem in the case of polynomials with integer coefficients. This result is obtained by adapting Buchberger's algorithm to the integer case. As an application, we also provide a more general upper bound on the length of strictly ascending chains of ideals in the ring $Z[x_1,\ldots,x_n]$.
Many applications of Computational Algebra, however, do not require the complete solution of the membership problem, and alternative approaches are available. In this thesis we survey the method of characteristic sets, originally introduced by Ritt in the forties and now widely applied, particularly in Elementary Geometry Theorem Proving. We present optimal algorithms for computing a characteristic set with simple-exponential sequential and polynomial parallel time complexities.
We finally present an attempt to generalize some of the constructive methods of Commutative Algebra to the Algebra of differential polynomials. Rings of differential polynomials do not resemble their purely algebraic counterparts: we prove that there exist non-recursive differential ideals in $Z\{x\}$ and hence that, in general, no effective method can be devised to solve the membership problem in this case. However, special classes of ideals can be algorithmically approached, and toward this goal we generalize the concept of H-basis, first introduced by Macaulay for algebraic ideals, to differential rings.
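For the field-coefficient case the thesis starts from, ideal membership via a Gröbner basis can be sketched with sympy (which works over the rationals; the thesis's adaptation of Buchberger's algorithm to integer coefficients is not part of the library):

```python
from sympy import symbols, groebner

x, y = symbols("x y")
gens = [x**2 + y, x*y - 1]               # generators of the ideal
G = groebner(gens, x, y, order="lex")    # Buchberger-style basis over Q

# Membership test: reduce modulo the basis; remainder 0 means f is in the ideal.
f = x**3 + x*y + x*y**2 - y              # equals x*(x**2 + y) + y*(x*y - 1)
quotients, remainder = G.reduce(f)
print(remainder == 0)                    # True
```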
- TR1992-607 1992 GMRES/CR and Arnoldi/Lanczos as Matrix Approximation Problems Greenbaum, A.; Trefethen, L. Abstract | PDF
Title: GMRES/CR and Arnoldi/Lanczos as Matrix Approximation Problems
Author(s): Greenbaum, A.; Trefethen, L.
Abstract:
The GMRES and Arnoldi algorithms, which reduce to the CR and Lanczos algorithms in the symmetric case, both minimize $\|p(A)b\|$ over polynomials p of degree n. The difference is that p is normalized at $z = 0$ for GMRES and at $z = \infty$ for Arnoldi. Analogous "ideal GMRES" and "ideal Arnoldi" problems are obtained if one removes b from the discussion and minimizes $\|p(A)\|$ instead. Investigation of these true and ideal approximation problems gives insight into how fast GMRES converges and how the Arnoldi iteration locates eigenvalues.
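A small numpy sketch of the GMRES side of this equivalence: minimizing $\|p(A)b\|$ over degree-n polynomials with $p(0) = 1$ is the least-squares problem $\min_c \|b - AKc\|$ over the Krylov basis $K = [b, Ab, \ldots, A^{n-1}b]$. Forming K explicitly is numerically poor and is for illustration only; real GMRES uses Arnoldi orthogonalization:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 5
A = np.eye(N) + 0.3 * rng.standard_normal((N, N))
b = rng.standard_normal(N)

# Krylov basis K = [b, Ab, ..., A^{n-1} b]; p(A) b = b - A K c with p(0) = 1.
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(n)])
c, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
residual = b - A @ (K @ c)
print(np.linalg.norm(residual) / np.linalg.norm(b))   # GMRES residual at step n
```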
- TR1992-608 1992 Matrices that Generate the Same Krylov Residual Spaces Greenbaum, A.; Strakos, Z. Abstract | PDF
- Ph.D. Thesis 1992 Typing Higher-Order Functions with Dynamic Dispatching Hsieh, Chih-Hung Abstract | PDF
Title: Typing Higher-Order Functions with Dynamic Dispatching
Candidate: Hsieh, Chih-Hung
Advisor(s): Harrison, Malcolm C.
Abstract:
We design new type expressions and algorithms to classify and check object types in higher-order programming. Our computation model is imperative and strongly typed. It has dynamically dispatched functions, higher-order bounded polymorphic functions, record and function subtyping, parameterized types, both named and structural types, free-union types, existential union types, poly-typed variables, poly-typed expressions, and heterogeneous collections.
A prototype of a mini-language with the above features is implemented in Prolog with a type checking system. A small but powerful set of typing structures and operations is identified. The type checking rules are formally defined. A new technique is developed for translating recursive type relations into cyclic AND/OR graphs. Efficient algorithms are designed for resolving generalized AND/OR graphs with constraints on valid cycles.
Using elegant syntax, the new type system describes more general and precise type relations than other type systems we know of. The new technique for translating type relations into AND/OR graphs provides a new direction for implementing a higher-order polymorphic type system, one not available in unification-based type systems. The AND/OR graph models are general enough to represent recursive relations, and their applications are not limited to type checking. Our AND/OR graph resolution algorithms find the optimal solutions. They are proven efficient in theory and shown practical in our implementation.
- Ph.D. Thesis 1992 Computer Simulation of Cortical Polymaps Landau, Pierre Abstract | PDF
Title: Computer Simulation of Cortical Polymaps
Candidate: Landau, Pierre
Advisor(s): Schwartz, Eric
Abstract:
Neo-cortical sensory areas of the vertebrate brain are organized in terms of topographic maps of peripheral sense-organs. Cortical topography has been generally modeled in terms of a continuous map of a peripheral sensory surface onto a cortical surface. However, the details of cortical architecture do not conform to this concept. Most, if not all, cortical areas consist of an interlaced structure containing multiple topographic maps of distinct classes of neural input. The term "polymap" is used to refer to a cortical area which consists of more than one system, interlaced in a globally topographic, but locally columnar fashion. The best known example of a cortical polymap is provided by the ocular dominance column system in layer IV of primate striate cortex, but the puff/extra-puff and orientation systems of surrounding layers also illustrate this concept, as do the thick-thin-interstripe columns of V-2, and the direction columns of MT. Since polymap architecture seems to be a common architectural pattern in the neo-cortex, this work addresses the computational modeling of polymap systems, with the expectation that such modeling will lead to a better understanding of the underlying biology. An algorithm is presented, based on the computational geometry constructs of Generalized Voronoi Polygon and Medial Axis, which provides a general method for simulating polymap systems. It also adds a powerful technique to the repertoire of Digital Image Warping. The algorithm is illustrated using the ocular dominance column and orientation column systems of V-1. In addition, a mechanism is proposed and demonstrated to account for the spatial registration of the ocular dominance and orientation column systems. Computer simulations of the activity evoked by binocular stimuli, as they would appear at the level of layers III and IV in V-1, are shown, and compared to results from recent experiments. Methods of generalizing these techniques to other common polymap cortical areas are outlined.
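A minimal sketch of the generalized-Voronoi partition such simulations build on (not the thesis's full algorithm, which also uses the medial axis to warp each class's map): each point of a simulated cortical sheet is assigned to the class, left-eye or right-eye here, of its nearest seed. Seed positions and grid size are illustrative placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
left_seeds = rng.uniform(0, 1, (20, 2))    # e.g. left-eye column centers
right_seeds = rng.uniform(0, 1, (20, 2))   # right-eye column centers

seeds = np.vstack([left_seeds, right_seeds])
owner = np.array([0] * len(left_seeds) + [1] * len(right_seeds))
tree = cKDTree(seeds)

ys, xs = np.mgrid[0:256, 0:256] / 256.0
_, idx = tree.query(np.column_stack([xs.ravel(), ys.ravel()]))
columns = owner[idx].reshape(256, 256)     # interlaced 0/1 column pattern
print(columns.mean())                      # roughly balanced coverage
```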
- Ph.D. Thesis 1992 Polymorphic Type Inference and Abstract Data Types Laufer, Konstantin Abstract | PDF
Title: Polymorphic Type Inference and Abstract Data Types
Candidate: Laufer, Konstantin
Advisor(s): Goldberg, Benjamin; Odersky, Martin (Yale)
Abstract:
Many statically-typed programming languages provide an abstract data type construct, such as the package in Ada, the cluster in CLU, and the module in Modula-2. However, in most of these languages, instances of abstract data types are not first-class values. Thus they cannot be assigned to a variable, passed as a function parameter, or returned as a function result.
The higher-order functional language ML has a strong and static type system with parametric polymorphism. In addition, ML provides type reconstruction and consequently does not require type declarations for identifiers. Although the ML module system supports abstract data types, their instances cannot be used as first-class values for type-theoretic reasons.
In this dissertation, we describe a family of extensions of ML. While retaining ML's static type discipline, type reconstruction, and most of its syntax, we add significant expressive power to the language by incorporating first-class abstract types as an extension of ML's free algebraic datatypes. In particular, we are now able to express
- multiple implementations of a given abstract type,
- heterogeneous aggregates of different implementations of the same abstract type, and
- dynamic dispatching of operations with respect to the implementation type.
Following Mitchell and Plotkin, we formalize abstract types in terms of existentially quantified types. We prove that our type system is semantically sound with respect to a standard denotational semantics.
We then present an extension of Haskell, a non-strict functional language that uses type classes to capture systematic overloading. This language results from incorporating existentially quantified types into Haskell and gives us first-class abstract types with type classes as their interfaces. We can now express heterogeneous structures over type classes. The language is statically typed and offers comparable flexibility to object-oriented languages. Its semantics is defined through a type-preserving translation to a modified version of our ML extension.
We have implemented a prototype of an interpreter for our language, including the type reconstruction algorithm, in Standard ML.
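A rough Python analogue of the three capabilities listed above (the dissertation's setting is ML and Haskell with existentially quantified types, which Python only mimics through dynamic dispatch): an abstract interface, two implementations with hidden representations, and a heterogeneous aggregate over them.

```python
from abc import ABC, abstractmethod

class Counter(ABC):
    """The abstract type's interface; implementations hide their
    representation (the 'existentially quantified' part)."""
    @abstractmethod
    def inc(self): ...
    @abstractmethod
    def value(self) -> int: ...

class IntCounter(Counter):          # implementation 1: an integer
    def __init__(self): self._n = 0
    def inc(self): self._n += 1
    def value(self): return self._n

class ListCounter(Counter):         # implementation 2: a list of ticks
    def __init__(self): self._ticks = []
    def inc(self): self._ticks.append(None)
    def value(self): return len(self._ticks)

counters: list[Counter] = [IntCounter(), ListCounter()]  # heterogeneous aggregate
for c in counters:
    c.inc()                          # dynamic dispatch per implementation
print([c.value() for c in counters]) # [1, 1]
```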
- Ph.D. Thesis 1992 A sublanguage based medical language processing system for German Oliver, Neil Abstract | PDF
Title: A sublanguage based medical language processing system for German
Candidate: Oliver, Neil
Advisor(s): Sager, Naomi
Abstract:
The major accomplishments reported in this thesis are:
- The development of a computer grammar for a nontrivial sublanguage of German. This grammar, using the LSP (Linguistic String Processor) grammar formalism, solves a number of parsing problems arising in free word order languages such as German.
- The development of an LSP-based information formatting system that obtains semantic representations of texts in a medical sublanguage of German.
- The confirmation of the sublanguage hypothesis (explained below).
In LSP grammar theory, sentences in a language are derived from a collection of basic sentence types. The basic sentence types are described in terms of the major syntactic classes (e.g., noun, verb, adjective) of the language. Sentences are derived from these basic sentences by the insertion of optional structures called adjuncts, by conjoining, and by substituting words in the major classes. Insertion, conjoining, and substitution are constrained by co-occurrence restrictions between elements in the derived syntactic structures. The restrictions subcategorize the major word classes into subclasses that may co-occur in sentences according to the co-occurrence restrictions.
The sublanguage hypothesis elaborates LSP grammar theory in the following way. In a particular domain of discourse, the subcategorization of the major word classes reflects the underlying semantics of the domain. The basic sentence types of the language, represented by sublanguage subclasses instead of major word classes, can function as data structures (called information formats ) representing the information of the domain.
The LSP Medical Language Processor (LSP/MLP) is an information retrieval/information extraction system based on sublanguage and information formatting. It processes sentences in the English sublanguage of clinical reporting into information formats, which are in turn converted into database update records for a relational database. The information formats are derived from sublanguage co-occurrence information obtained from a corpus of discharge summaries.
The German information formatting system implemented in this work processes German Arztbriefe (doctor letters) of cancer surgery patients into information formats. It confirms the sublanguage hypothesis because it re-uses the sublanguage information (co-occurrence information and formats) of the English LSP/MLP system in an equivalent sublanguage, showing that the sublanguage information reflects the semantics of the domain.
- Ph.D. Thesis 1992 Image Processing, Pattern Recognition and Attentional Algorithms in a Space-Variant Active Vision System Ong, Ping-Wen Abstract | PDF
Title: Image Processing, Pattern Recognition and Attentional Algorithms in a Space-Variant Active Vision System
Candidate: Ong, Ping-Wen
Advisor(s): Schwartz, Eric
Abstract:
A space-variant sensor motivated by the human visual system has its highest resolution at the center, with resolution falling off rapidly toward the periphery. It has the advantages of a wide visual field and, at the same time, high central resolution. The dramatic reduction in pixel count in this kind of sensor makes it possible to build a real-time vision system using only moderate computational resources. On the other hand, a space-variant image has a different layout from a raster image: the neighbor relationships change from pixel to pixel. We need special methods to solve this neighborhood problem.
We use a connectivity graph to represent neighbor relations between pixels in a space-variant image. We can use it to define operators for edge detection, smoothing, etc. We use a two-level pyramid based on the connectivity graph to perform local thresholding for segmentation. The translation, rotation, and scaling graphs are three extensions of the connectivity graph which are used to translate, rotate, and scale space-variant images. We can use these graphs to perform scale- and rotation-independent template matching.
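A minimal sketch of such a connectivity graph, assuming an idealized regular log-polar grid of R rings and S sectors (a real sensor's neighbor structure is more irregular, which is exactly why an explicit graph is needed), with a graph-based smoothing operator as an example use:

```python
import numpy as np

R, S = 32, 64                      # rings x sectors (2048 pixels total)

def neighbors(r, s):
    nbrs = [(r, (s - 1) % S), (r, (s + 1) % S)]   # same ring, wrap in angle
    if r > 0:
        nbrs.append((r - 1, s))                   # toward the fovea
    if r < R - 1:
        nbrs.append((r + 1, s))                   # toward the periphery
    return nbrs

# The connectivity graph: pixel -> list of neighboring pixels.
graph = {(r, s): neighbors(r, s) for r in range(R) for s in range(S)}

def smooth(img):
    """Graph-based smoothing: average each pixel with its neighbors."""
    out = np.empty_like(img)
    for (r, s), nbrs in graph.items():
        out[r, s] = (img[r, s] + sum(img[n] for n in nbrs)) / (1 + len(nbrs))
    return out

img = np.random.default_rng(2).random((R, S))
print(smooth(img).shape)
```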
We successfully apply several feature designs for OCR in the space-variant domain. They include Characteristic-Loci, Partition, Heat-Signature, and Projection. All of them are translation and scale invariant. We also have two rotation-invariant methods based on the Partition and Projection methods.
Since the space-variant sensor has higher resolution at the center, the recognition result is more reliable if we point the sensor close to the candidate object. Therefore, if we want to recognize a single character, the center of that character is the best place to point the sensor. But for recognizing adjacent characters, except for well-separated ones, we need to point the sensor to a place from which we can separate the characters.
Based on this reliability analysis, we devised four attentional rules and an algorithm for moving sensors to recognize character strings in static natural scenes.
Finally, we describe the algorithms for reading characters from the license plate of a moving vehicle, including stages for traffic zone finding, moving car finding, license plate finding, license plate tracking, and character reading.
- Ph.D. Thesis 1992 On Compiling Regular Loops for Efficient Parallel Execution Ouyang, Pei Abstract | PDF
Title: On Compiling Regular Loops for Efficient Parallel Execution
Candidate: Ouyang, Pei
Advisor(s): Kedem, Zvi; Palem, Krishna
Abstract:
In this thesis, we study the problem of mapping regular loops onto multiprocessors. We develop mapping schemes that yield very efficient executions of regular loops on shared and distributed memory architectures. We also develop novel analysis techniques with which we argue about the efficiency of the resulting executions. The quality of the execution of these regular loops in the distributed memory setting relies heavily on implementing cyclic shifts efficiently. Effectively, cyclic shifts are used to communicate results between individual processors, to which different interdependent iterations are assigned. Therefore, in order to achieve efficient executions of regular loops on distributed memory architectures, we also develop and analyze algorithms for solving the cyclic shift problem. In order to analyze the executions of regular loops that result from any specific mapping, we need to characterize the important parameters that determine its efficiency. We formally characterize a basic set of such parameters. These parameters facilitate the analysis of the memory and the processor requirements of a given execution, as well as its running time. Using these parameters, we analyze a greedy execution scheme in the shared memory model. For example, we can determine the limit on the number of processors beyond which no speedup can be attained by the greedy method for regular loops. The greedy scheme is of interest because it exploits the maximal possible parallelism in a natural way.
We then address the mapping scheme of regular loops onto distributed memory machines. Unfortunately, we show that the problem of finding an optimal mapping is computationally intractable in this case. In order to provide schemes that can be actually applied to regular loops at compile-time, we relax the requirement that the resulting executions be optimum. Instead, we design a heuristic mapping algorithm and validate it through experiments. This heuristic mapping scheme relies heavily on the use of efficient algorithms for realizing cyclic shifts. Therefore, we also study the problem of realizing cyclic shifts on hypercube architectures.
- TR1992-597 1992 Semantic Analyses for Storage Management Optimizations in Functional Language Implementations Park, G. Abstract | PDF
Title: Semantic Analyses for Storage Management Optimizations in Functional Language Implementations
Author(s): Park, G.
Abstract:
One of the major overheads in implementing functional languages is the storage management overhead due to dynamic allocation and automatic reclamation of indefinite-extent storage. This dissertation investigates the problems of statically inferring lifetime information about dynamically-allocated objects in higher-order polymorphic functional languages, both strict and non-strict, and of applying that information to reduce the storage management overhead.
We have developed a set of compile-time semantic analyses for a higher-order, monomorphic, strict functional language based on denotational semantics and abstract interpretation. They are 1) escape analysis, which provides information about the relative lifetimes of objects such as arguments and local objects defined within a function with respect to an activation of the function call, 2) refined escape analysis which, as a refinement of escape analysis, provides information about the lifetimes of components of aggregate structures, and 3) reference escape analysis which provides information about the relative lifetimes of references created within a function with respect to an activation of the function.
We also have developed a compile-time semantic analysis called order-of-demand analysis for higher-order, monomorphic, non-strict functional languages, which provides information about the order in which the values of bound variables are demanded and thus allows one to compute a range of information including strictness, evaluation-order, and evaluation-status information.
Using the notion of polymorphic invariance, we describe a method for analyzing a polymorphic language by using the analyses for a monomorphic language. We then extend those analyses for a strict language to a non-strict language using non-strict program transformation and evaluation-status information.
Based on statically inferred escape information, we propose a combination of storage management optimization techniques including stack allocation, explicit reclamation, in-place reuse, reference counting elimination, block allocation/reclamation, and improving generational garbage collection.
- TR1992-616 1992 Domain Decomposition Algorithms for the P-Version Finite Element Method for Elliptic Problems Pavarino, L. Abstract | PDF
Title: Domain Decomposition Algorithms for the P-Version Finite Element Method for Elliptic Problems
Author(s): Pavarino, L.
Abstract:
Domain decomposition algorithms based on the Schwarz framework were originally proposed for the h-version finite element method for elliptic problems. In this thesis, we study some Schwarz algorithms for the p-version finite element method, in which increased accuracy is achieved by increasing the degree p of the elements while the mesh is fixed. These iterative algorithms, often of conjugate gradient type, are both parallel and scalable, and therefore very well suited for massively parallel computing.
We consider linear, scalar, self-adjoint, second-order elliptic problems and quadrilateral elements in the finite element discretization. For a class of overlapping methods, we prove a constant bound, independent of the degree p, the mesh size H, and the number of elements N, for the condition number of the iteration operator. This optimal result holds in two and three dimensions for additive and multiplicative schemes, as well as variants on the interface.
We then consider local refinement for the same class of overlapping methods in two dimensions. Optimal bounds are obtained under certain hypotheses on the choice of refinement points, while in general almost optimal bounds with logarithmic growth in p are obtained. In the analysis of these local refinement methods, we prove some results of independent interest, such as a polynomial discrete Sobolev inequality and a bounded decomposition of discrete harmonic polynomials.
Iterative substructuring methods in two dimensions are also considered. We use the additive Schwarz framework to prove almost optimal bounds as in the h-version finite element method.
Numerical experiments confirming the theoretical results are conducted in two dimensions for model problems.
- TR1992-614 1992 Some Schwarz Algorithms for the P-Version Finite Element Method Pavarino, L. Abstract | PDF
Title: Some Schwarz Algorithms for the P-Version Finite Element Method
Author(s): Pavarino, L.
Abstract:
Domain decomposition methods based on the Schwarz framework were originally proposed for the h-version finite element method for elliptic problems. In this paper, we consider instead the p-version, in which increased accuracy is achieved by increasing the degree of the elements while the mesh is fixed. We consider linear, scalar, self-adjoint, second-order elliptic problems and quadrilateral elements in the finite element discretization. For a class of overlapping additive Schwarz methods, we prove a constant bound, independent of the degree p and the number of elements N, for the condition number of the iteration operator. This optimal result holds in two and three dimensions for additive and multiplicative schemes, as well as variants on the interface. We then study local refinement for the same class of overlapping methods in two dimensions. A constant bound holds under certain hypotheses on the refinement region, while in general an almost optimal bound with logarithmic growth in p is obtained.
- Ph.D. Thesis 1992 Japanese/English Machine Translation Using Sublanguage Patterns and Reversible Grammars Peng, Ping Abstract | PDF
Title: Japanese/English Machine Translation Using Sublanguage Patterns and Reversible Grammars
Candidate: Peng, Ping
Abstract:
For this thesis, a Japanese/English machine translation system with reversible components has been designed and implemented in PROLOG. Sublanguage co-occurrence patterns have been used to address the problems of lexical and structural selection in the transfer between the internal representations of a pair of natural languages. The system has been tested translating Japanese into English in the domain of programming language manuals. The evaluation of the test outputs provides some assessment of the utility of the sublanguage approach as a method for the development and refinement of a machine translation system. The thesis also explores the roles that a reversible grammar would play in sharing linguistic knowledge between parsing and generation.
The system has been developed with the goal of using sublanguage word co-occurrence patterns to simplify the description of syntactic/semantic knowledge needed in both the transfer rules and the analysis of the source language. In particular, sublanguage co-occurrence patterns are introduced to provide semantic constraints and ellipsis recovery in parsing Japanese.
This thesis introduces a right-to-left parsing scheme for Japanese. The idea for the right-to-left parsing algorithm evolved from the desire to produce partial syntactic analyses of Japanese in a more deterministic manner than was achieved by conventional left-to-right parsing schemes. The algorithm makes efficient use of sublanguage co-occurrence patterns as semantic knowledge to help disambiguate Japanese parses. The enforcement of syntactic and semantic constraints is tightly interwoven during the course of parsing. The performance in parsing Japanese has thereby been significantly enhanced.
A procedure has been implemented for translating a Definite Clause Grammar dually into a PROLOG parser and PROLOG generator, so that one grammar can be used for parsing and generation. In current natural language processing systems, separate grammars are used for parsing and generation. However, there has long been an interest in designing a single grammar for both parsing and synthesis for reasons of efficiency and integrity, as well as linguistic elegance and perspicuity. As part of the current implementation, a strategy has been developed for creating efficient grammars for both parsing and generation using a goal reordering technique within the logic programming framework.
- Ph.D. Thesis 1992 The Analysis and Generation of Tests for Programming Language Translators Rennels, Deborah Abstract | PDF
Title: The Analysis and Generation of Tests for Programming Language Translators
Candidate: Rennels, Deborah
Advisor(s): Schonberg, Edmond
Abstract:
This thesis addresses the automation of two aspects of compiler validation testing: semantic analysis of existing test programs, and construction of new test programs. Semantic analysis is required during test modification and maintenance, and also when evaluating the language coverage attained by the test suite. In the current state of practice, both the semantic analysis and the construction of new test programs are extremely labor-intensive tasks; both, however, are amenable to automation. We describe two systems: one, which we have implemented, involves test case analysis and feature identification; the other is a proposed system for automatic generation of tests from test specifications. We tested our methods on the largest and most comprehensive compiler validation project to date: the Ada Compiler Validation Capability (ACVC), a large collection of Ada test programs used to verify that compilers conform to the Ada language standard.
We first describe the Ada Features Identification System (AFIS), a system which automates test program analysis. AFIS provides three different methods for identifying Ada language features in test programs, ranging from elementary syntactic items to complex context-sensitive combinations of semantic features. Semantic feature combinations are specified by writing program templates in a pattern language which is an extension of Ada, and pattern-matching these templates against test programs.
In the second part of this thesis we define a language to facilitate the specification of Ada compiler test objectives, and the design of a system that uses these specifications to automatically generate valid Ada test programs. The language allows a test developer to write a specification that embodies the testing goal of a given objective, without including all type and expression information required in a complete test program. These details are supplied automatically by the generator system. We show, by numerous examples taken from the Ada Implementors Guide (the design document for the Ada validation suite), how Ada test objectives can be specified in this language. The focus of our examples is constraint violation checking, which is an important component of Ada's strong typing system, and also a basic organizing principle of the ACVC tests.
- Ph.D. Thesis 1992 Massively Parallel Bayesian Object Recognition Rigoutsos, Isidore Abstract | PDF
Title: Massively Parallel Bayesian Object Recognition
Candidate: Rigoutsos, Isidore
Advisor(s): Hummel, Robert
Abstract:
The problem of model-based object recognition is a fundamental one in the field of computer vision, and represents a promising direction for practical applications. In this talk we will describe the design, analysis, implementation and testing of a model-based object recognition system.
In the first part of the talk, we will discuss two parallel algorithms for performing geometric hashing. The first algorithm regards geometric hashing as a connectionist algorithm with information flowing via patterns of communication, and is designed for an SIMD hypercube-based machine. The second algorithm is more general, and treats the parallel architecture as a source of "intelligent memory"; the algorithm achieves parallelism through broadcast facilities from the parallel machine's host. A number of enhancements to the geometric hashing method, such as hash table equalization, the use of hash table symmetries, and hash table foldings, will also be presented. These enhancements were developed specifically for the parallel algorithms, and lead to substantial performance improvements.
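A serial Python sketch of the basic geometric hashing scheme being parallelized (similarity-transform case, point features; the quantization step and coordinates are illustrative): every model point is encoded relative to each ordered basis pair during preprocessing, and at recognition time scene features vote through the same table.

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def basis_coords(p, b0, b1):
    """Coordinates of p in the frame sending b0 -> 0 and b1 -> 1 (complex plane);
    invariant under translation, rotation, and scale."""
    z, z0, z1 = complex(*p), complex(*b0), complex(*b1)
    return (z - z0) / (z1 - z0)

def quantize(z, step=0.25):
    return (round(z.real / step), round(z.imag / step))

def build_table(models):
    table = defaultdict(list)
    for name, pts in models.items():
        for b0, b1 in permutations(range(len(pts)), 2):
            for k, p in enumerate(pts):
                if k not in (b0, b1):
                    key = quantize(basis_coords(p, pts[b0], pts[b1]))
                    table[key].append((name, (b0, b1)))
    return table

def vote(table, scene, b0, b1):
    tally = defaultdict(int)
    for k, p in enumerate(scene):
        if k not in (b0, b1):
            for entry in table[quantize(basis_coords(p, scene[b0], scene[b1]))]:
                tally[entry] += 1
    return max(tally.items(), key=lambda kv: kv[1]) if tally else None

models = {"M": [(0, 0), (1, 0), (1, 1), (0, 2)]}
table = build_table(models)
T = lambda p: (2 * p[0] - p[1] + 3, p[0] + 2 * p[1] - 1)  # rotation+scale+shift
scene = [T(p) for p in models["M"]]
print(vote(table, scene, 0, 1))   # (('M', (0, 1)), 2): the correct basis wins
```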
In the second part of the talk, we will examine the performance of geometric hashing methods in the presence of noise. The quantization of the invariants can result in a non-graceful degradation of the performance. We will present precise formulas as well as first-order approximations describing the dependency of the computed invariants on Gaussian positional error, for the similarity and affine transformation cases. Knowledge of this dependency allows the incorporation of an error model in the geometric hashing framework and subsequently leads to improved performance. A counter-intuitive result regarding the solutions of certain linear systems will also be derived as a corollary of this analysis.
In the final part of the talk, we will present an interpretation of geometric hashing that allows the algorithm to be viewed as a Bayesian approach to model-based object recognition. This interpretation, which is a new form of Bayesian-based model matching, leads to well-justified formulas, and gives a precise weighted-voting method for the evidence-gathering phase of geometric hashing. These formulas replace traditional heuristically-derived methods for performing weighted voting, and also provide a precise method for evaluating uncertainty.
A prototype object recognition system using these ideas has been implemented on a CM-2 Connection Machine. The system is scalable and can recognize aircraft and automobile models subjected to 2D rotation, translation, and scale changes in real-world digital imagery. This is the first system of its kind that is scalable, uses large databases, can handle noisy input data, works rapidly on an existing parallel architecture, and exhibits excellent performance with real-world, natural scenes.
- Ph.D. Thesis 1992 Control of a Dexterous Robot Hand: Theory, Implementation, and Experiments Silver, Naomi Abstract | PDF
Title: Control of a Dexterous Robot Hand: Theory, Implementation, and Experiments
Candidate: Silver, Naomi
Advisor(s): Mishra, Bud
Abstract:
Advanced robotic systems, such as multi-fingered hands, are becoming more complex, and, as yet, many of the basic questions involved remain unanswered. What control law should we use? What constitutes a good control law? How should we describe motions? What constitutes a broad, yet efficient description of motions for a grasped object?
In addition to the complexity, robotic systems frequently undergo upgrades, and it is therefore necessary to design the system in an organized manner. This includes such things as hierarchical software, system-independent descriptions of motion, and system-independent control laws.
In this thesis, we address these issues for a less complex system. We have focused our attention on describing motions for objects being grasped by a multi-fingered hand. We present a formulation for motion primitives, which allow object manipulation and require limited parameter specification. We have attempted to find a control law which performs well under adverse conditions. We have built the system and tested it on the NYU Four Finger Manipulator, which is a two-dimensional hand. Even for this simplified problem, there remains a large degree of complexity, and there are as yet no definitive solutions to these problems.
- Ph.D. Thesis 1992 Executable Operational Semantics of Programming Languages Siritzky, Brian Abstract | PDF
Title: Executable Operational Semantics of Programming Languages
Candidate: Siritzky, Brian
Advisor(s): Dewar, Robert
Abstract:
Since the inception of computer languages there have been attempts to define programming languages formally. Several markedly different methodologies have been proposed to solve this problem. This thesis argues for Executable Operational Semantics (EOS) as a methodology for formal definition which has many fundamental advantages. The EOS methodology has, however, been broadly criticized. We show that the major objections against EOS are unfounded, and that executability is suitable and useful for many applications of formal definitions. The primary criticisms of EOS definitions (that they are hardware- and architecture-specific, that they are unable to describe concurrency and non-determinism, and that they overspecify implementation details) are countered by the demonstration that Ada/Ed, an executable definition of Ada developed at New York University, can avoid or overcome each problem. A description of the implementation of Ada's real arithmetic and representation specifications reveals hardware and architectural independence in an executable definition. Ada/Ed's model of Ada tasking demonstrates that concurrency can be defined within an executable framework, and we argue that an executable definition can describe all non-deterministic aspects of Ada. The problems of overspecificity can be alleviated by the appropriate choice of metalanguage and software techniques, and by suitable parameterization of the formal definition. Finally, we describe some general advantages of executable definitions. We present worked examples of questions to a formal definition of Ada, and establish that requiring a definition to be executable enhances rather than degrades its usability and credibility.
- Ph.D. Thesis 1992 Non-Correcting Error Recovery For LR Parsers Snyder, Kirk Abstract | PDF
Title: Non-Correcting Error Recovery For LR Parsers
Candidate: Snyder, Kirk
Advisor(s): Schwartz, Jacob T.
Abstract:
In recent years much effort has been devoted to the automatic generation of parsers, with considerable success. The error-handling mechanisms of these parsers are still not completely satisfactory, however. Currently available techniques are either too slow for practical production compilers, or they leave open the possibility of many spurious diagnostic messages. This thesis presents a parsing technique designed to minimize the frequency of spurious or misleading diagnostic messages emitted by the compiler, without the efficiency cost of similarly robust parsers. The technique parses program text following a syntax error as a "suffix" in the programming language, reporting errors in invalid suffixes. The system achieves its high efficiency by accepting a superset of the suffixes of the language being parsed, but a sufficiently small superset that very few errors are undetected. The technique described has been fully implemented, and a number of experiments on typical syntax errors in various programming languages are presented. We describe our parsing system in detail and assess its strengths and weaknesses relative to other parsing systems.
- Ph.D. Thesis 1992 Global Methods for Image Motion Analysis Sundareswaran, V. Abstract | PDF
Title: Global Methods for Image Motion Analysis
Candidate: Sundareswaran, V.
Advisor(s): Hummel, Robert
Abstract:
Processing motion information is an important problem in building automated vision systems. A moving sensor can obtain knowledge about the environmental layout, its own motion, and motion of objects in the scene by processing the temporal information in the imagery. We provide algorithms that can determine self-motion (or egomotion) by observing a sequence of images produced by a moving sensor in a rigid, stationary environment. The algorithms make use of optical flow information extracted from the sequence, and unlike most alternative methods, are global and robust to inaccuracies in the flow data.
Two algorithms are presented. Both algorithms assume that the first stage of visual motion analysis, the computation of an image vector flow field that describes the instantaneous motion of individual points, has been solved.
The first algorithm, the flow circulation algorithm, determines the rotational parameters using the curl of the flow field, which under many conditions is approximately a linear function. The coefficients of the linear function, which may be determined by simple regression, are the desired rotational parameters. Circulation values, defined to be contour integrals of the vector field on the image plane, may be used in place of curl values, resulting in robustness. The second algorithm determines the translational parameters of the motion. The inner product of the image vector flow field and a certain circular vector field gives rise to a scalar function that is of a particular quadratic polynomial form when the center of the circular field is chosen appropriately. This correct choice of the center is related to the translational parameters and can be found by projecting the inner product function onto suitable subspaces determined by the quadratic polynomial form. Three different methods, of increasing complexity and accuracy, are developed. A fourth, fast but approximate, method is also presented.
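A numpy sketch of the flow circulation idea under one common convention (rotation parameters (A, B, C) about the x, y, z axes, unit focal length; the thesis's contour-integral version adds robustness that this pointwise-curl sketch lacks). For purely rotational flow, u = Axy - B(x^2+1) + Cy and v = A(y^2+1) - Bxy - Cx, so the curl is the linear function -Ax - By - 2C, and a plane fit recovers the rotation:

```python
import numpy as np

A_true, B_true, C_true = 0.02, -0.01, 0.005
n = 64
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
U = A_true * X * Y - B_true * (X**2 + 1) + C_true * Y   # rotational flow field
V = A_true * (Y**2 + 1) - B_true * X * Y - C_true * X

dx = xs[1] - xs[0]
curl = np.gradient(V, dx, axis=1) - np.gradient(U, dx, axis=0)  # dv/dx - du/dy

# Least-squares plane fit: curl ~ a x + b y + c, so A = -a, B = -b, C = -c/2.
M = np.column_stack([X.ravel(), Y.ravel(), np.ones(n * n)])
a, b, c = np.linalg.lstsq(M, curl.ravel(), rcond=None)[0]
print(-a, -b, -c / 2)   # recovers (A, B, C) up to discretization error
```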
The algorithms are described, analyzed and experimental results are shown. The thesis contains mathematical observations that provide insight into the problem of motion analysis, and experimental observations that demonstrate the applicability of the algorithms.
- TR1992-621 1992 Statistical Approach to Affine Invariant Matching with Line Features Tsai, F. Abstract | PDF
Title: Statistical Approach to Affine Invariant Matching with Line Features
Author(s): Tsai, F.
Abstract:
One of the most important goals of computer vision research is object recognition. Currently, most object recognition algorithms assume reliable quality of image segmentation, which in practice is often not the case. This report examines the combination of the Hough Transform with a variation of Geometric Hashing as a technique for model-based object recognition in seriously degraded single intensity images.
Recently there has been much focus on the performance analysis of geometric hashing. However, to our knowledge, all of it applies the paradigm to point features and shows that the technique is sensitive to noise. There has as yet been no exploration of line features. In this report, we use lines as the primitive features to compute the geometric invariants for fast indexing into the geometric hash table containing the pre-processed model information. In addition, we analytically determine the effect of perturbations of line parameters on the computed invariant for the case where models are allowed to undergo affine transformation.
We have implemented the system and run a series of experiments on polygonal objects, which are modeled by lines. The experiments show that the technique is noise resistant and suitable in environments containing many occlusions.