Title: The Cell Programming Language
Author(s): Agarwal, P.
We describe the Cell Programming Language (CPL), which we have designed to write programs that mimic the life of a biological cell. The aim is to study the complex interactions between cells which lead to diverse shapes of cellular aggregates.
Each cell is treated as a two-dimensional homogeneous polygon with a specific area. A cell goes through a series of states in its lifetime. In each state, one can specify the cell's growth rate; information about cell division and cell differentiation; the chemical constituents of the cell and their interactions; and cell motion. This behavior may be conditional based on the cell's own status (chemical concentrations and size) or on its neighborhood (the type of cells surrounding it, the contact lengths with each of them, their areas, the directions to these cells, and their chemical concentrations). The language is explored by modeling cellular sorting in vitro, and aggregation in Dictyostelium discoideum (a cellular slime mold).
Title: Cell-based Computer Models in Developmental Biology
Candidate: Agarwal, Pankaj
Advisor(s): Schwartz, Jacob T.
In developmental biology, modeling and simulation play an important role in understanding cellular behavior. We suggest a simple language, the Cell Programming Language (CPL), to write computer programs to describe this behavior. Using these programs, it is possible to simulate and visualize cell behavior.
A genome is the program for the development of an organism. The genome, in conjunction with the environment, determines the behavior of each cell of the organism. The program for each cell (written in CPL) plays the role of its genome. The program for an individual cell consists of a set of states. In each state, rules are specified which determine the cell properties (e.g., shape, motility, and concentrations of various molecular species). Different states of the same cell signify different phases in the cell's life. Each cell has a tissue type associated with it. Cells of the same tissue type execute the same CPL program.
We use a discrete-time simulation model. At every time step, each cell executes all the instructions in its present state sequentially. All cells are assumed to be executing in parallel, with synchronization performed after every time step.
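As an illustration of this synchronous discrete-time model (a sketch only, not CPL's actual syntax: the cell fields, state names, and rule functions here are made up), each time step can be viewed as all cells reading a common snapshot and then updating together:

```python
# Sketch of a synchronous discrete-time cell simulation in the spirit of CPL.
# The Cell fields and the rule are illustrative, not CPL's actual constructs.

class Cell:
    def __init__(self, state, area):
        self.state = state      # current state in the cell's program
        self.area = area        # a physical attribute

def grow(cell, snapshot):
    # Example rule: grow by 10% per step while in the 'growing' state.
    # (The snapshot would let a rule inspect neighboring cells.)
    return Cell(cell.state, cell.area * 1.1)

PROGRAM = {"growing": [grow]}   # one rule list per state

def step(cells):
    # All cells read the same snapshot, then update "in parallel";
    # synchronization happens once per time step.
    snapshot = list(cells)
    new = []
    for c in cells:
        for rule in PROGRAM[c.state]:
            c = rule(c, snapshot)
        new.append(c)
    return new

cells = [Cell("growing", 100.0), Cell("growing", 50.0)]
for _ in range(3):
    cells = step(cells)
```

Because every rule reads the pre-step snapshot, the result is independent of the order in which cells are visited within a step.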
The cells are two-dimensional. Each cell has a physical location comprising a collection of discrete connected points. This physical presence imparts to the cells the attributes of area, perimeter, and neighbors (other cells). The neighbor attribute forms the basis for all intercellular communication.
The language contains features for specifying a cell's growth rate, division, differentiation, chemical constituents and their interactions, and motion, conditioned on the cell's own status or on its neighborhood.
We have employed CPL to model the following: aggregation in cellular slime mold in response to a chemotactic agent; the formation of skeletal elements in the vertebrate limb; and cellular segregation due to differential adhesion.
Title: A Language for Semantic Analysis
Author(s): Cai, J.
Semantic analysis is important for compilers. In the APTS program transformation system, semantics is specified by rules in the language RSL. The semantic rules are interpreted by APTS to generate the semantic information of the program, which is then used by the rewriting engine for program translation. This approach has proved to be convenient and powerful in our construction of a SETL-to-C compiler. In this paper, we discuss the features, applications, implementation strategy, and performance of RSL.
Title: Applications of Convexity in Computational Geometry
Candidate: Capoyleas, Vasilis
Advisor(s): Pach, Janos; Pollack, Richard
We present seven results in computational geometry. The concept of convexity plays a vital role in all seven of the results; either as a tool in the proof method or as a means of giving a formal definition.
The topics considered are:
$\epsilon$-nets: We provide strong upper bounds for the size of the smallest weak $\epsilon$-net of a set of points, in two basic cases.
Geometric Clusterings: We provide the first polynomial algorithm to find an optimal clustering of a set of points in the plane. The optimality criteria are based on the diameter and radius of the clusters.
The Hadwiger-Kneser-Thue Poulsen conjecture: This famous 40-year-old conjecture states that the area of the union of a set of disks is diminished if the disks are pushed together. We provide two partial results to this conjecture.
Grasping: We consider grasping of polygonal objects by a pair of parallel jaws. We define a model and prove that a fairly large class of polygons can be grasped under this model.
Graph drawing and crossing numbers: We consider the problem of estimating the maximum number of edges for graphs that satisfy some sort of a relaxed planarity condition. We provide exact bounds for an important special case.
Title: New Techniques for the Analysis and Implementation of Functional Programs
Candidate: Chuang, Tyng-Ruey
Advisor(s): Goldberg, Benjamin
Functional programming languages provide programmers with clean semantics and side-effect free computation, which make easier the tasks of designing programs and reasoning about them. Efficient implementations of purely functional programs, however, can pose certain challenges. Our purpose in this dissertation is to develop new techniques for the efficient analysis and implementation of functional programs.
Our first goal is to investigate a syntactic approach, in contrast to the usual semantic approaches, to finding the least fixed points of higher-order functions over finite domains. The second objective is to develop implementation techniques for aggregate data structures for functional programs such that accesses to aggregates are both efficient and side-effect free.
Finding the least fixed point of a monotonic function over a finite domain is an essential task when analyzing a functional program in the framework of abstract interpretation. Previous methods for least fixed point finding have primarily used semantic approaches, which often traverse large portions of the semantic domain and may be very inefficient even for simple programs. We propose a syntactic method based on an augmented simply typed lambda calculus. It is shown that, for finite domains, the syntactic method is both sound and complete with respect to the semantics. Moreover, we demonstrate that the proposed syntactic method can be quite effective in cases where the usual semantic method is very inefficient.
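The baseline "semantic" method that the syntactic approach improves upon is plain Kleene iteration: start at the bottom of the domain and apply the functional until the value stabilizes. A minimal sketch over an assumed two-point domain (the functional and domain are illustrative, not from the dissertation):

```python
# Kleene iteration: the standard semantic way to find the least fixed
# point of a monotone function on a finite domain. The two-point domain
# and the functional below are toy assumptions for illustration.

BOTTOM, TOP = 0, 1   # two-point domain: 0 ("undefined") below 1

def lfp(f, bottom):
    """Iterate f from the bottom element until the value stops changing."""
    x = bottom
    while True:
        fx = f(x)
        if fx == x:
            return x
        x = fx

# A monotone functional F(g)(x) = x JOIN g(x), with functions over the
# two-point domain represented as tables (tuples indexed by the argument).
def functional(g):
    return tuple(max(x, g[x]) for x in (BOTTOM, TOP))

fix = lfp(functional, (BOTTOM, BOTTOM))   # least fixed point of F
```

Each iteration may traverse the whole (here tiny) function table; on higher-order domains the table grows exponentially, which is exactly the inefficiency the abstract refers to.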
Efficient implementation of aggregate data structures for functional programs has been an active research topic. The problem arises because once an aggregate is updated, both the old version and the newly updated copy must be preserved to maintain the side-effect free semantics of functional languages. We modify the shallow binding scheme of Baker to implement functional arrays for efficient incremental updates and voluminous reads. The scheme, however, uses side-effects and cannot be implemented in purely functional languages themselves. We then investigate the possibility of implementing efficient aggregates without using side-effects, and show that real-time deques can be implemented in a purely functional way. We describe several interesting applications of this technique.
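To see the version-preservation problem concretely, here is one standard purely functional technique, path copying in a balanced tree (shown only as an illustration of the problem; it is not Baker's shallow binding nor the thesis's real-time deque construction). An update rebuilds the O(log n) path to the changed element and leaves the old version intact:

```python
# A purely functional array via path copying in a binary tree: update()
# returns a new version while the old version remains fully usable.

def build(xs):
    if len(xs) == 1:
        return ("leaf", xs[0])
    mid = len(xs) // 2
    return ("node", build(xs[:mid]), build(xs[mid:]), len(xs))

def get(t, i):
    if t[0] == "leaf":
        return t[1]
    left_size = t[3] // 2          # build() always puts len//2 on the left
    return get(t[1], i) if i < left_size else get(t[2], i - left_size)

def update(t, i, v):
    # Copy only the nodes on the path to index i; share the rest.
    if t[0] == "leaf":
        return ("leaf", v)
    left_size = t[3] // 2
    if i < left_size:
        return ("node", update(t[1], i, v), t[2], t[3])
    return ("node", t[1], update(t[2], i - left_size, v), t[3])

v0 = build([10, 20, 30, 40])
v1 = update(v0, 2, 99)   # new version; v0 is untouched
```

Both versions share all untouched subtrees, so an update costs O(log n) space and time rather than a full copy.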
Title: Knowledge Preconditions for Plans
Author(s): Davis, Ernest
For an agent to be able to rely on a plan, he must know both that he is physically capable of carrying out the physical actions involved, and that he knows enough to carry out the plan. In this talk, we advance and discuss new definitions of "knowing enough to carry out a plan", for the case of a single agent carrying out a sequence of primitive actions one at a time. We consider both determinate and indeterminate plans.
We show how these definitions can be expressed in a formal logic, using a situation calculus model of time and a possible worlds model of knowledge. The definitions strictly subsume previous theories for the single-agent case without concurrent actions.
We illustrate the power of these definitions by showing the kinds of results they support.
Title: Schwarz Analysis of Iterative Substructuring Algorithms for Elliptic Problems in Three Dimensions
Author(s): Dryja, M.; Smith, B.; Widlund, O.
May 1993
Domain decomposition methods provide powerful preconditioners for the iterative solution of the large systems of algebraic equations that arise in finite element or finite difference approximations of partial differential equations. The preconditioners are constructed from exact or approximate solvers for the same partial differential equation restricted to a set of subregions into which the given region has been divided. In addition, the preconditioner is often augmented by a coarse, second-level approximation that provides additional, global exchange of information and can enhance the rate of convergence considerably. The iterative substructuring methods, based on decompositions of the region into non-overlapping subregions, form one of the main families of such methods.
Many domain decomposition algorithms can conveniently be described and analyzed as Schwarz methods. These algorithms are fully defined in terms of a set of subspaces and auxiliary bilinear forms. A general theoretical framework has previously been developed and, in this paper, these techniques are used in an analysis of iterative substructuring methods for elliptic problems in three dimensions. Special emphasis is placed on the difficult problem of designing good coarse models and obtaining robust methods for which the rate of convergence is insensitive to large variations in the coefficients of the differential equation. Domain decomposition algorithms can conveniently be built from modules that represent local and global components of the preconditioner. In this paper, a number of such possibilities are explored, and it is demonstrated how a great variety of fast algorithms can be designed and analyzed.
Title: Schwarz Methods of Neumann-Neumann Type for Three-Dimensional Elliptic Finite Element Problems
Author(s): Dryja, M.; Widlund, O.
Several domain decomposition methods of Neumann-Neumann type are considered for solving the large linear systems of algebraic equations that arise from discretizations of elliptic problems by finite elements. We will only consider problems in three dimensions. Several new variants of the basic algorithm are introduced in a Schwarz method framework that provides tools which have already proven very useful in the design and analysis of other domain decomposition and multi-level methods.
The Neumann-Neumann algorithms have several advantages over other domain decomposition methods. The subregions, which define the subproblems, only share the boundary degrees of freedom with their neighbors. The subregions can also be of quite arbitrary shape and many of the major components of the preconditioner can be constructed from subprograms available in standard finite element program libraries. However, in its original form, the algorithm lacks a mechanism for global transportation of information and its performance therefore suffers when the number of subregions increases. In the new variants of the algorithms, considered in this paper, the preconditioners include global components, of low rank, to overcome this difficulty. Bounds are established for the condition number of the iteration operator, which are independent of the number of subregions, and depend only polylogarithmically on the number of degrees of freedom of individual local subproblems. Results are also given for problems with arbitrarily large jumps in the coefficients across the interfaces separating the subregions.
Title: Nonholonomic Motion Planning : Algorithms and Software
Candidate: Fernandes, Christopher
Advisor(s): Mishra, Bud
Robot motion planning with nonholonomic constraints has recently engaged the attention of roboticists, as its applications in dexterous manipulation, mobile robots, and space robotics have begun to be understood. Such constraints arise from two different sources: rolling constraints and non-integrable conservation laws. For instance, the kinematics of dexterous manipulation using hard fingers making contact with a hard object requires nonholonomic motion planning (NMP) in order to satisfy the rolling constraints. On the other hand, controlling the attitude of space platform-based manipulators using only the internal motion of their manipulator joints requires NMP, as a result of the law of conservation of angular momentum.
Recently, some algorithms and their implementations in software have been created in order to understand, simulate, and control nonholonomic systems. Currently, most of these algorithms have been demonstrated only in somewhat specialized applications. There is a great need for software that enables researchers to quickly test algorithms on simple systems and then experiment with potential generalizations.
In this thesis, we describe a software system that we have developed at NYU and the underlying principles and algorithms (the ``Basis algorithm''). The system runs on the SGI Iris and is written in C with auxiliary tools from Unix, Mathematica, DASSL, etc. We also describe how we have designed controllers for such example nonholonomic systems as the unicycle, the space station, and a space platform-mounted robot manipulator.
It is hoped that this thesis will be useful for control engineers engaged in designing non-linear control systems; for roboticists studying dexterous manipulation, motion planning, and space robotics; and finally, for software engineers interested in building tools and applications for robotics.
Title: The Complexity of Resolvent Resolved
Author(s): Gallo, G.; Mishra, B.
The concept of a resolvent of a prime ideal was originally introduced by J.F. Ritt along with the notion of a characteristic set. The motivation for studying resolvents comes from their connections with the birational isomorphisms that describe structures of irreducible algebraic varieties by means of an equivalent hypersurface and a one-to-one rational map. As a result, these ideas have a wide range of applications in such areas as solid modeling and computer aided design and manufacturing. An algorithm to compute the resolvent by means of characteristic sets was first proposed by Ritt. This and some related algorithms have resurfaced as interest in resolvent structures has grown, spurred by their applicability.
Unfortunately, the algebraic complexity of the resolvent and the computational complexity of the associated algorithms have never been satisfactorily explored. In this paper, we give single exponential upper and lower bounds for the degrees of the resolvent and its associated parametrizing polynomials. We also show that the resolvent can be computed deterministically in single exponential sequential and polynomial parallel time complexity. All previous algorithms for computing the resolvent had relied on a random choice of certain extraneous parameters.
Title: Dynamic Impact Analysis: Analyzing Error Propagation in Program Executions
Candidate: Goradia, Tarak
Advisor(s): Weyuker, Elaine
Test adequacy criteria serve as rules to determine whether a test set adequately tests a program. The effectiveness of a test adequacy criterion is determined by its ability to detect faults. For a test case to detect a specific fault, it should execute the fault, cause the fault to generate an erroneous state and propagate the error to the output. Analysis of previously proposed code-based testing strategies suggests that satisfying the error propagation condition is both important and expensive. The technique of dynamic impact analysis is proposed for analyzing a program execution and estimating the error propagation behavior of various potential sources of errors in the execution. Impact graphs are introduced to provide an infrastructure supporting the analysis. A program impact graph modifies the notion of a program dependence graph proposed in the literature in order to capture some of the subtle impact relationships that exist in a program. An execution impact graph represents the dynamic impact relationships that are demonstrated during a program execution. The notion of impact strength is defined as a quantitative measure of the error sensitivity of an impact. A cost-effective algorithm for analyzing impact relationships in an execution and computing the impact strengths is presented. A research prototype implemented to demonstrate the feasibility of dynamic impact analysis is briefly described. The time complexity of dynamic impact analysis is shown to be linear with respect to the original execution time, and experimental measurements indicate that the constant of proportionality is a small number. The experiments undertaken to validate the computation of impact strengths are presented. An experience study relating impact strengths to error propagation in faulty programs is also presented. The empirical results provide evidence indicating a strong positive correlation between impact strength and error propagation.
The results also emphasize the need for better heuristics to improve the accuracy of the error propagation estimates. Potential applications of dynamic impact analysis to mutation testing, syntactic coverage-based testing and dynamic program slicing are discussed.
Title: A Hybrid Algorithm for Optimizing Eigenvalues of Symmetric Definite Pencils
Author(s): Haeberly, J.; Overton, M.
We present an algorithm for the optimization of the maximum eigenvalue of a symmetric definite pencil depending affinely on a vector of parameters. The algorithm uses a hybrid approach, combining a scheme based on the method of centers, developed by Boyd and El Ghaoui, with a new quadratically convergent local scheme. A convenient expression for the generalized gradient of the maximum eigenvalue of the pencil is also given, expressed in terms of a dual matrix. The algorithm computes the dual matrix which establishes the optimality of the computed solution.
Title: Characterization of Self-Similar Multifractals with Wavelet Maxima
Author(s): Hwang, W.; Mallat, S.
Self-similar multifractals have a wavelet transform whose maxima define self-similar curves in the scale-space plane. We introduce an algorithm that recovers the affine self-similarity parameters through a voting procedure in the corresponding parameter space. The voting approach is robust with respect to renormalization noise and can recover the value of parameters having random fluctuations. We describe numerical applications to Cantor measures, dyadic multifractals, and the study of diffusion limited aggregates.
Title: Singularity Detection, Noise Reduction and Multifractal Characterization
Candidate: Hwang, Wen-Liang
Advisor(s): Mallat, Stephane
Most of a signal's information is often carried by its singularities. We study the characterization of singularities with the wavelet transform and its modulus maxima. We introduce numerical algorithms to detect and characterize pointwise singularities from the behavior of the wavelet transform maxima across scales. As an application, we develop a denoising algorithm which discriminates the signal information from noise through an analysis of local singularities. In one dimension, we recover a piecewise smooth signal in which the sharp transitions are preserved. In two dimensions, the wavelet maxima algorithm detects and characterizes the edges. The geometrical properties of edges are used to discriminate the noise from the image information, and the denoising algorithm restores sharp images even at low SNR.
Multifractals are singular signals having some self-similarity properties. We develop a robust algorithm to extract the fractal parameter of fractional Brownian motion embedded in white noise. Fractal parameters are estimated from the evolution of the variance of the wavelet coefficients across scales with a modified penalty method. Self-similar multifractals have a wavelet transform whose maxima define self-similar curves in the scale-space plane. We introduce an algorithm to recover the affine self-similarity parameters with a voting procedure. This voting strategy is robust with respect to renormalization noise. We describe numerical applications to Cantor measures, dyadic multifractals, and the study of diffusion limited aggregates.
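Voting in a discretized parameter space is in the same spirit as a Hough transform: each observation votes for all parameter values consistent with it, and the true parameters accumulate the most votes, which is what makes the approach robust to fluctuations. The fractal-specific details are omitted here; as an assumed stand-in, this sketch recovers the slope and intercept of a line from sample points by voting:

```python
# Hough-style voting in a discretized (slope, intercept) parameter space.
# A line-fitting toy stands in for the self-similarity parameter recovery.

from collections import Counter

points = [(x, 2 * x + 1) for x in range(10)]   # samples of y = 2x + 1

votes = Counter()
for slope10 in range(-50, 51):                 # slopes -5.0 .. 5.0, step 0.1
    slope = slope10 / 10
    for (x, y) in points:
        # Each point votes for every (slope, intercept) passing through it.
        intercept = round(y - slope * x, 1)
        votes[(slope, intercept)] += 1

(best_slope, best_intercept), _ = votes.most_common(1)[0]
```

Even if a minority of points were perturbed, the correct cell would still collect the plurality of votes, which is the robustness property the abstract relies on.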
Title: Competitive Algorithms and Lower Bounds for On-Line Scheduling of Multiprocessor Real-Time Systems
Author(s): Koren, G.; Shasha, D.
We study competitive on-line scheduling in multi-processor real-time environments. In our model, every task has a deadline and a value that it obtains only if it completes by its deadline. A task can be assigned to any processor, all of which are equally powerful. The problem is to design an on-line scheduling algorithm (i.e., the scheduler has no knowledge of a task until it is released) with worst case guarantees as to the total value obtained by the system.
We study systems with two or more processors and with uniform or non-uniform value density. We present an inherent limit on the best competitive guarantee that any on-line parallel real-time scheduler can give. Then we present a competitive algorithm that achieves a worst case guarantee within a factor of 2 of the best possible guarantee in many cases. These are the most general results yet known for parallel overloaded real-time scheduling.
Title: Competitive On-line Scheduling for Overloaded Real-Time Systems
Candidate: Koren, Gilad
Advisor(s): Shasha, Dennis; Mishra, Bud
We study competitive on-line scheduling in uniprocessor and multiprocessor real-time environments. In our model, tasks are sporadic and preemptable. Every task has a deadline and a value that the system obtains only if the task completes its execution by its deadline. The aim of a scheduler is to maximize the total value obtained from all the tasks that complete before their deadline.
An on-line scheduler has no knowledge of a task until it is released. The problem is to design an on-line scheduler with worst case guarantees even in the presence of overloaded periods. The guarantee is given in terms of a positive competitive factor. We say that an on-line algorithm has a competitive factor of r, 0 < r <= 1, when under all possible circumstances (i.e., task sets) the scheduler will get at least r times the best possible value. The best value is the value obtained by a clairvoyant algorithm. In contrast to an on-line scheduler, the clairvoyant algorithm knows the entire task set a priori at time zero.
When a uniprocessor system is underloaded, there exist several optimal on-line algorithms that will schedule all tasks to completion (e.g., the Earliest Deadline First algorithm). However, under overload, these algorithms perform poorly. Heuristics have been proposed to deal with overloaded situations, but these give no worst case guarantees.
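The overload failure of Earliest Deadline First is easy to reproduce. In this sketch (task parameters are made up for illustration), a low-value task with an early deadline lures EDF away from a high-value task that needed the whole interval; a clairvoyant scheduler would simply skip the small task:

```python
# A unit-step preemptive EDF simulator on one processor.
# Each task is (release, exec_time, deadline, value).

def edf_value(tasks, horizon):
    remaining = {i: t[1] for i, t in enumerate(tasks)}
    value = 0
    for now in range(horizon):
        pending = [i for i, t in enumerate(tasks)
                   if t[0] <= now and remaining[i] > 0]
        if not pending:
            continue
        j = min(pending, key=lambda j: tasks[j][2])  # earliest deadline first
        remaining[j] -= 1
        if remaining[j] == 0 and now + 1 <= tasks[j][2]:
            value += tasks[j][3]                     # completed by deadline
    return value

# Overloaded instance: 11 units of work, but only 10 fit before the deadlines.
tasks = [(0, 10, 10, 10),   # high value, needs the whole interval [0, 10)
         (0, 1, 2, 1)]      # earlier deadline, low value

edf = edf_value(tasks, 20)               # EDF serves the small task first
clairvoyant = edf_value([tasks[0]], 20)  # clairvoyant ignores the small task
```

Here EDF obtains value 1 while the clairvoyant schedule obtains 10; stretching the same construction makes EDF's competitive factor arbitrarily bad, which is why overload requires different algorithms.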
We present an optimal on-line scheduling algorithm for uniprocessor overloaded systems called D-over. D-over is optimal in the sense that it has the best competitive factor possible. Moreover, while the system is underloaded, D-over will obtain 100% of the possible value.
In the multiprocessor case, we study systems with two or more processors. We present an inherent limit (lower bound) on the best competitive guarantee that any on-line parallel real-time scheduler can give. Then we present a competitive algorithm that achieves a worst case guarantee which is within a small factor from the best possible guarantee in many cases.
These are the most general results yet known for competitive scheduling of multiprocessor real-time systems.
Title: Matching Pursuits with Time-Frequency Dictionaries (Revised)
Author(s): Mallat, S.; Zhang, Z.
We introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. We derive a signal energy distribution in the time-frequency plane which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. We compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser.
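The greedy loop at the heart of matching pursuit is short: pick the atom best correlated with the current residual, record its coefficient, subtract its contribution, and repeat. In this sketch a tiny dictionary of unit vectors stands in for the paper's Gabor dictionary:

```python
# Matching pursuit over a toy dictionary of unit-norm atoms.

import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, steps):
    residual = list(signal)
    expansion = []                        # list of (coefficient, atom index)
    for _ in range(steps):
        # Select the atom most correlated with the residual.
        k = max(range(len(atoms)),
                key=lambda k: abs(dot(residual, atoms[k])))
        c = dot(residual, atoms[k])       # projection (atoms are unit-norm)
        expansion.append((c, k))
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return expansion, residual

# Redundant dictionary: two canonical directions plus a diagonal atom.
s = 1 / math.sqrt(2)
atoms = [(1.0, 0.0), (0.0, 1.0), (s, s)]
signal = (3.0, 1.0)
expansion, residual = matching_pursuit(signal, atoms, steps=2)
```

After two steps the residual of this particular signal is zero; in general the residual's energy decreases monotonically, and the chosen atoms form the adaptive expansion the abstract describes.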
Title: Feedback Control of Miniature Direct Drive Devices
Author(s): Max, D.; Wallace, R.
We discuss dynamics and control of miniature direct drive actuators, specifically for the two-axis spherical pointing motor (SPM) and for the one-axis direct drive finger joint. These actuators can move both accurately and at high speed; however, the capabilities of these devices cannot be fully exploited using open loop control techniques. We derive an ideal PID feedback control scheme. Our initial experiments indicate that PID feedback control for the SPM and the finger joint is highly feasible and a significant improvement over open loop methods.
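The structure of a discrete PID loop can be sketched in a few lines. The plant model and the gains below are illustrative assumptions, not the actual SPM or finger-joint dynamics or the scheme derived in the paper:

```python
# A discrete PID controller driving a toy damped unit-mass plant.
# Gains and plant parameters are made up for illustration.

def simulate_pid(kp, ki, kd, setpoint, steps, dt=0.01):
    pos, vel = 0.0, 0.0
    integral, prev_err = 0.0, setpoint
    for _ in range(steps):
        err = setpoint - pos
        integral += err * dt                  # I term accumulates error
        deriv = (err - prev_err) / dt         # D term damps the response
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        # Toy plant: damped unit mass driven by the control effort u.
        vel += (u - 2.0 * vel) * dt
        pos += vel * dt
    return pos

final = simulate_pid(kp=50.0, ki=5.0, kd=2.0, setpoint=1.0, steps=5000)
```

With these gains the closed loop is stable and settles near the setpoint, illustrating the kind of tracking that open loop drive alone cannot guarantee under disturbances.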
Title: NYU Educational Robotics Project: A Pedagogic Overview
Author(s): Mishra, B.; Antoniotti, M.; Hansen, F.; Wallace, R.
The primary goal of the NYU educational robotics project (NYU-ED) is to create a disseminable, multi-functional and inexpensive laboratory course sequence, aimed at improving the practical skills of undergraduate students specializing in robotics, vision, AI and manufacturing disciplines.
Earlier approaches to robotics laboratory education have used either industrial robot arms or commercially available low-power arms. In each case, there have been considerable problems: the lack of a good interface, insufficient flexibility to add other devices, or the absence of adequate safety. Also, the underlying dynamical model is usually so complicated that performing any control experiment has been beyond the scope of all but advanced graduate students.
In this report, we describe our approach to deal with these problems in constructing a modern robotics laboratory. The main work-horse of the NYU educational project was chosen to be a multi-functional ED I robot system, consisting of a 4 DOF DD arm and several auxiliary devices. The system was designed to be simple, inexpensive, flexible and safe.
We also describe our experience with some advanced laboratory experiments using miniature direct drive robots that have proven to be ideal for teaching as they are reconfigurable, safe and easy to program.
Title: A Survey of Computational Differential Algebra
Author(s): Mishra, B.
In this note, we explore the computational aspects of several problems in differential algebra with concrete applications in dynamics and motion-planning problems in robotics, automatic synthesis of control schemes for nonlinear systems and simulation of physical systems with fixed degrees of freedom.
Our primary aim is to study, compute, and structurally describe the solutions of a system of differential equations with coefficients in a field (say, the field of complex numbers). There have been various approaches in this direction: e.g., the ideal-theoretic approach of Ritt, the Galois-theoretic approach of Kolchin and Singer, and the group-theoretic techniques of Lie. It is interesting to study their interrelationships and the effectivity of the various problems they suggest.
In general, these problems are known to be uncomputable; thus, we need to understand in what situations they become feasible. As related computer science questions, we also need to study the complexity of these problems, the underlying data structures, and the effects of representation (e.g., sparsity).
Of related interest are some closely related problems, such as the symbolic integration problem, solving difference equations, integro-differential equations, and differential equations with algebraic constraints.
Title: ED I: NYU Educational Robot Design and Evaluation
Author(s): Mishra, B.; Antoniotti, M.
The primary goal of the NYU educational robot project is to create a disseminable, multi-functional and inexpensive laboratory course sequence, aimed at improving the practical skills of undergraduate students specializing in robotics, vision, AI and manufacturing disciplines.
The main work-horse of the NYU educational project was chosen to be a multi-functional ED I robot system, consisting of a 4 DOF DD arm and several auxiliary devices. The system was designed to be simple, inexpensive, flexible and safe.
In this report, we describe the history, design, structure, and evaluation of this robot system. We also describe several robotics and related course sequences that can use the ED I system effectively, and we provide some example experiments that have been run successfully on ED I.
This report has benefited from the labor, contribution, discussions, advice and criticisms of several people on the ED I project team and the credit for the final product goes to the entire team.
Title: Probabilistic Methods in Computer Science and Combinatorics
Candidate: Narayanan, Babu
Advisor(s): Boppana, Ravi
Over the last few years, the probabilistic method has become an important tool in computer science and combinatorics. This thesis deals with three applications of the probabilistic method.
The first problem concerns a model of imperfect randomness: the slightly-random source of Santha and Vazirani. In a slightly-random source with bias $\epsilon$, the conditional probability that the next bit output is 1, given complete knowledge of the previous bits output, is between $1/2 - \epsilon$ and $1/2 + \epsilon$. We show that, for every fixed $\epsilon < 1/2$, and for most sets, the probability of hitting that set using a slightly-random source is bounded away from 0.
The second problem arises in parallel and distributed computing. A set of n processors is trying to collectively flip a coin, using a protocol that tolerates a large number of faulty processors. We demonstrate the existence of perfect-information protocols that are immune to sets of $\epsilon n$ faulty processors, for every fixed $\epsilon < 1/2$.
Finally, we consider a problem in Ramsey theory. Let an adversary color the edges of the binomial random graph with r colors, the edge probability being $c/\sqrt{n}$, where c is a large enough constant. We show that, almost surely, a constant fraction of the triangles in the graph will be monochromatic.
Title: Towards Second-Order Methods for Structured Nonsmooth Optimization
Author(s): Overton, M.; Ye, X.
Structured nonsmooth optimization objectives often arise in a composite form $f = h \circ a$, where h is convex (but not necessarily polyhedral) and a is smooth. We consider the case where the structure of the nonsmooth convex function h is known. Specifically, we assume that, for any given point in the domain of h, a parameterization of a manifold $\Omega$, on which h reduces locally to a smooth function, is given. We discuss two affine spaces: the tangent space to the manifold $\Omega$ at a point, and the affine hull of the subdifferential of h at the same point, and explain that these are typically orthogonal complements. We indicate how the construction of locally second-order methods is possible, even when h is nonpolyhedral, provided the appropriate Lagrangian, modeling the structure, is used. We illustrate our ideas with two important convex functions: the ordinary max function and the maximum eigenvalue function for symmetric matrices, and we solicit other interesting examples with genuinely different structure from the community.
Title: Singularity Detection, Dataflow Analysis of Logic Programs Using Typed Domains
Candidate: Papadopoulos, Georgios
Advisor(s): Harrison, Malcolm C.
Dataflow analysis for logic programming languages has traditionally collected information about properties of whole terms. As a result, pessimistic assumptions had to be made about the subterms of a term, missing information that could be used for better compiler optimization and partial evaluation.
We use type information to divide each term into sets of subterms (t-terms) and then collect dataflow information for these sets. We identify and solve several problems as we develop a new sharing analysis for logic programs using our t-terms in a denotational abstract interpretation framework.
Title: Statistical Recognition of Textured Patterns From Local Spectral Decomposition
Candidate: Perry, Adi
Advisor(s): Lowe, David
Unsupervised segmentation of an image into homogeneously textured regions and the recognition of known texture patterns have been important tasks in computer vision. This thesis presents a new set of algorithms and describes an implemented system which performs these tasks. Initial features are computed from a local multi-channel spectral decomposition of the image, implemented with Gabor filters. Textures are not assumed to have a band-limited frequency spectrum, and there is no supposition regarding the image contents: an image may contain unknown texture patterns or regions with no texture at all. Stability of the features is enhanced by employing a method for smoothing reliable measurements. Both the recognition and segmentation procedures use robust statistical algorithms and are performed locally on small image patches. In particular, statistical classification with principal components is used for recognition. Further accuracy is achieved by employing spatial consistency constraints. When a slanted texture is projected onto the image plane, the patterns undergo systematic changes in the density, area, and directionality of the texture elements. Recognition is made invariant to such transformations by representing texture classes with multiple descriptors, computed from carefully selected 3-D views of the patterns. Simulated projections of textures from arbitrary viewpoints are obtained using a new texture mapping algorithm. The segmentation algorithm overcomes the non-stationarity of the features by employing a new, robust similarity measure. The performance of these methods is demonstrated by applying them to real images.
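As a rough illustration of the kind of local multi-channel spectral decomposition described above, the sketch below builds a minimal Gabor filter bank and computes per-pixel magnitude features. The frequencies, orientations, and window size are hypothetical choices, not the parameters used in the thesis:

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Complex Gabor kernel: a Gaussian window times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid runs along orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * freq * xr)
    return envelope * carrier

def gabor_features(image, freqs=(0.1, 0.2), n_orient=4):
    """Per-pixel feature vector: magnitude response of each channel."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(f, np.pi * k / n_orient)
            # Circular 2-D convolution via the FFT (same-size output).
            resp = np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(kern, s=image.shape))
            feats.append(np.abs(resp))
    return np.stack(feats, axis=-1)

img = np.random.rand(32, 32)
F = gabor_features(img)
print(F.shape)  # (32, 32, 8): one magnitude map per filter channel
```

Taking the magnitude of the complex response discards phase, which is one simple way to obtain features that vary slowly within a homogeneous texture.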
Title: Automating Physical Database Design: An Extensible Approach
Candidate: Rozen, Steven
Advisor(s): Shasha, Dennis
In a high-level query language such as SQL, queries yield the same result no matter how the logical schema is physically implemented. Nevertheless, a query's cost can vary by orders of magnitude among different physical implementations of the same logical schema, even with the most modern query optimizers. Therefore, designing a low-cost physical implementation is an important pragmatic problem, one that requires a sophisticated understanding of physical design options and query strategies, and that involves estimating query costs, a tedious and error-prone process when done manually.
We have devised a simple framework for automating physical design in relational or post-relational DBMSs and in database programming languages. Within this framework, design options are uniformly represented as ``features'', and designs are represented by ``conflict''-free sets of features. (Mutually exclusive features conflict; two primary indexes on the same table are one example.) The uniform representation of design options as features accommodates a greater variety of design options than previous approaches; adding a new design option (e.g. a new index type) merely entails characterizing it as a feature with appropriate parameters. We propose an approximation algorithm, based on this framework, that finds low-cost physical designs. In an initial phase, the algorithm examines the logical schema, data statistics, and queries, and generates ``useful features'', features that might reduce query costs. In a subsequent phase, the algorithm uses the DBMS's cost estimates to find ``best features'', features that belong to the lowest-cost designs for each individual query. Finally, the algorithm searches among conflict-free subsets of the best features of all the queries to find organizations with low global cost estimates. We have implemented a prototype physical design assistant for the INGRES relational DBMS, and we evaluate its designs for several benchmarks, including AS3AP. Our experiments with the prototype show that it can produce good designs, and that the critical factor limiting their quality is the accuracy of query cost estimates. The prototype implementation isolates dependencies on INGRES, permitting our framework to produce design assistants for a wide range of relational, nested-relational, and object-oriented DBMSs.
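The feature/conflict idea can be illustrated with a deliberately tiny sketch. The feature names, benefit scores, and exhaustive search below are hypothetical stand-ins: the actual system generates useful features from the schema and workload and relies on the DBMS optimizer's cost estimates rather than fixed scores.

```python
from itertools import combinations

# Hypothetical features: (name, column it applies to, estimated benefit,
# i.e. cost reduction summed over the query workload).
features = [
    ("btree_idx_emp_id",  "emp.id",  9.0),
    ("hash_idx_emp_id",   "emp.id",  7.5),   # conflicts with the b-tree
    ("btree_idx_dept_no", "dept.no", 4.0),
]

def conflicts(a, b):
    # Two primary indexes on the same column are mutually exclusive.
    return a[1] == b[1]

def best_design(features, max_size=2):
    """Exhaustive search over conflict-free subsets (fine at toy scale;
    the real algorithm prunes via per-query best features)."""
    best, best_benefit = (), 0.0
    for r in range(1, max_size + 1):
        for subset in combinations(features, r):
            if any(conflicts(a, b) for a, b in combinations(subset, 2)):
                continue
            benefit = sum(f[2] for f in subset)
            if benefit > best_benefit:
                best, best_benefit = subset, benefit
    return [f[0] for f in best], best_benefit

design, benefit = best_design(features)
print(design, benefit)  # ['btree_idx_emp_id', 'btree_idx_dept_no'] 13.0
```

The conflicting hash index is skipped even though adding it would raise the raw benefit sum, which is exactly the role the conflict relation plays in the framework.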
Title: Two-Level Schwarz Methods for Nonconforming Finite Elements and Discontinuous Coefficients
Author(s): Sarkis, M.
Two-level domain decomposition methods are developed for a simple nonconforming approximation of second-order elliptic problems. A bound is established for the condition number of these iterative methods that grows only logarithmically with the number of degrees of freedom in each subregion. This bound holds for two and three dimensions and is independent of jumps in the value of the coefficients.
Title: Using Line Invariants for Object Recognition by Geometric Hashing
Author(s): Tsai, F.
Geometric Hashing is a model-based object recognition technique for detecting objects that may be partially overlapping or partly occluded. From local geometric features, it precompiles redundant transformation-invariant information about the models into a hash table; during recognition, invariants computed from a scene are used for fast indexing into the hash table to hypothesize possible matches between object instances and object models.
In its simplest form, the geometric hashing method assumes relatively noise-free data and is applied to objects with points as local features. However, extraction of the locations of point features is inherently error-prone, and the analysis of geometric hashing on point sets shows considerable noise sensitivity. Line features can generally be extracted with greater accuracy.
We investigate the use of line features for geometric hashing applied to 2-D (or flat 3-D) object recognition and derive, from a combination of line features, invariants for lines under various geometric transformations.
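A toy version of geometric hashing with line features can use the angle between two lines, which is invariant under similarity transformations (the thesis derives invariants for richer transformation classes). The models, bin count, and plain voting scheme below are illustrative assumptions:

```python
import math
from collections import defaultdict

def line_angle(line):
    # A line is (a, b, c) with ax + by + c = 0; direction angle in [0, pi).
    a, b, _ = line
    return math.atan2(-a, b) % math.pi

def invariant(l1, l2):
    """Angle between two lines: invariant under rotation, translation,
    and uniform scaling (similarity transformations)."""
    d = abs(line_angle(l1) - line_angle(l2)) % math.pi
    return min(d, math.pi - d)

def build_table(models, bins=36):
    """Precompile quantized pairwise invariants into a hash table."""
    table = defaultdict(list)
    for name, lines in models.items():
        for i in range(len(lines)):
            for j in range(i + 1, len(lines)):
                key = round(invariant(lines[i], lines[j]) / math.pi * bins)
                table[key].append(name)
    return table

def recognize(table, scene, bins=36):
    """Vote for every model whose invariants match the scene's."""
    votes = defaultdict(int)
    for i in range(len(scene)):
        for j in range(i + 1, len(scene)):
            key = round(invariant(scene[i], scene[j]) / math.pi * bins)
            for name in table.get(key, []):
                votes[name] += 1
    return max(votes, key=votes.get) if votes else None

# Hypothetical models: a "square" (two pairs of perpendicular lines)
# and a "wedge" (two lines meeting at 45 degrees).
models = {
    "square": [(0, 1, 0), (0, 1, -1), (1, 0, 0), (1, 0, -1)],
    "wedge":  [(0, 1, 0), (1, -1, 0)],
}
table = build_table(models)
# Scene: the square rotated by 30 degrees (line angles 30 and 120 degrees).
s, c = math.sin(math.pi / 6), math.cos(math.pi / 6)
scene = [(s, -c, 0), (s, -c, -1), (c, s, 0), (c, s, -1)]
print(recognize(table, scene))  # square
```

Quantizing the invariant into bins is what makes table lookup tolerant of small measurement errors; the thesis replaces this fixed binning with a weighted voting scheme derived from an explicit noise analysis.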
Title: A Probabilistic Approach to Geometric Hashing Using Line Features
Author(s): Tsai, F.
Most current object recognition algorithms assume reliable image segmentation, which in practice is often not available. We examine the combination of the Hough Transform with a variation of Geometric Hashing as a technique for model-based object recognition in single, seriously degraded intensity images. Prior work on the performance analysis of geometric hashing has focused on point features and shown its noise sensitivity. This paper uses line features to compute recognition invariants in a potentially more robust way. We investigate the statistical behavior of these line features analytically. Various viewing transformations, which 2-D (or flat 3-D) objects undergo during image formation, are considered. For the case of affine transformations, which are often suitable substitutes for more general perspective transformations, we show experimentally that the technique is noise resistant and can be used in highly occluded environments.
Title: A Probabilistic Approach to Geometric Hashing using Line Features
Candidate: Tsai, Frank
Advisor(s): Schwartz, Jacob T.
One of the most important goals of computer vision research is object recognition. Most current object recognition algorithms assume reliable image segmentation, which in practice is often not available. This research exploits the combination of the Hough method with the geometric hashing technique for model-based object recognition in seriously degraded intensity images.
We describe the analysis, design and implementation of a recognition system that can recognize, in a seriously degraded intensity image, multiple objects modeled by a collection of lines.
We first examine the factors affecting line extraction by the Hough transform and propose various techniques to cope with them. Line features are then used as the primitive features from which we compute the geometric invariants used by the geometric hashing technique. Various geometric transformations, including rigid, similarity, affine, and projective transformations, are examined. We then derive the ``spread'' of a computed invariant over the hash space caused by ``perturbation'' of the lines giving rise to that invariant. This is the first noise analysis of its kind for line features in geometric hashing. The result of the noise analysis is then used in a weighted voting scheme for the geometric hashing technique. We have implemented the system described and carried out a series of experiments on polygonal objects modeled by lines, assuming affine approximations to perspective viewing transformations. Our experimental results show that the technique described is noise resistant and suitable in an environment containing many occlusions.
Title: Miniature Direct Drive Rotary Actuators II: Eye, Finger, and Leg
Author(s): Wallace, R.
We have developed miniature direct drive DC motor actuators for robotics. These actuators have low friction, small size, high speed, low construction cost, and no gear backlash; they operate safely without the use of limit switches and generate moderate torque at a high torque-to-weight ratio. Our initial experiments indicated the feasibility of constructing a variety of new high-speed, low-cost actuators for applications in camera pointing, robot hands, and robot legs. In this work we study some prototype devices in each of these categories.
Title: Voice-Bandwidth Visual Communication Through Logmaps: The Telecortex
Author(s): Wallace, R.; Bederson, B.; Schwartz, E.
We present a robotic video telephone application of the Cortex-1 miniaturized space-variant active vision system. The embedded processor architecture of Cortex-1 enables it to implement a variety of functions not found in conventional video telephones; for example, the camera tracks moving users with its pan-tilt mechanism. We also report an analog channel coding scheme to transmit logmap video images through band-limited analog channels such as the public switched telephone network (PSTN). The transmitter divides the voice frequency band into 768 channels and modulates two values in quadrature on each channel. Some channels are reserved for special calibration signals enabling the receiver to recover both the phase and magnitude of the transmitted signal. The remaining channels carry pixel intensities. We synthesize the signal in the frequency domain and run the FFT algorithm to implement a fast conversion to a real, time-domain signal. A phase-lock loop keeps the receiver frame-synchronized with the transmitter. We constructed an experimental video telephone that sends 1376-pixel logmap images at 3.9 frames per second through the PSTN. Using the analog channel coding scheme, we achieve an effective data transfer rate in excess of 40000 bits per second.
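The frequency-domain synthesis step can be sketched as follows: pack two values per frequency bin (in quadrature), mirror the spectrum for conjugate symmetry, and inverse-FFT to a real time-domain signal. The FFT length and bin mapping here are assumptions; the actual modem also inserts calibration channels and must fit the signal into the PSTN voice band:

```python
import numpy as np

def encode(pixels, n_channels=768, n_fft=2048):
    """Pack two real values per frequency bin (in-phase + quadrature),
    then synthesize a real time-domain signal with an inverse FFT."""
    assert len(pixels) <= 2 * n_channels
    vals = np.zeros(2 * n_channels)
    vals[:len(pixels)] = pixels
    spectrum = np.zeros(n_fft, dtype=complex)
    # Bins 1..n_channels carry the data; bin k gets I + jQ.
    spectrum[1:n_channels + 1] = vals[0::2] + 1j * vals[1::2]
    # Mirror with conjugate symmetry so the time signal is real.
    spectrum[-n_channels:] = np.conj(spectrum[1:n_channels + 1])[::-1]
    return np.fft.ifft(spectrum).real

def decode(signal, n_channels=768):
    """Forward FFT recovers the two values modulated on each channel."""
    spectrum = np.fft.fft(signal)
    data = spectrum[1:n_channels + 1]
    out = np.empty(2 * n_channels)
    out[0::2] = data.real
    out[1::2] = data.imag
    return out

pixels = np.random.rand(1376)   # one logmap frame, as in the abstract
sig = encode(pixels)
recovered = decode(sig)[:1376]
print(np.allclose(recovered, pixels))  # True: lossless round trip
```

With 768 channels carrying two values each, one frame of 1376 pixels fits in a single synthesis block, consistent with the figures quoted in the abstract.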
Title: Space Variant Image Processing
Author(s): Wallace, R.; Ong, P.; Bederson, B.; Schwartz, E.
This paper describes a graph-based approach to image processing, intended for use with images obtained from sensors having space variant sampling grids. The connectivity graph (CG) is presented as a fundamental framework for posing image operations on any kind of space variant sensor. Partially motivated by the observation that human vision is strongly space variant, a number of research groups have been experimenting with space variant sensors. Such systems cover wide solid angles yet maintain high acuity in their central regions. Implementation of space variant systems poses at least two outstanding problems. First, such a system must be active, in order to utilize its high acuity region; second, significant image processing problems are introduced by the non-uniform pixel size, shape, and connectivity. Familiar image processing operations such as connected components, convolution, template matching, and even image translation take on new and different forms when defined on space variant images. The present paper provides a general method for space variant image processing, based on a connectivity graph which represents the neighbor-relations in an arbitrarily structured sensor. We illustrate this approach with the following applications: (1) Connected components is reduced to its graph-theoretic counterpart. We illustrate this on a logmap sensor, which possesses a difficult topology due to the branch cut associated with the complex logarithm function. (2) We show how to write local image operators on the connectivity graph that are independent of the sensor geometry. (3) We relate the connectivity graph to pyramids over irregular tessellations, and implement a local binarization operator in a 2-level pyramid. (4) Finally, we expand the connectivity graph into a structure we call a transformation graph, which represents the effects of geometric transformations in space variant image sensors.
Using the transformation graph, we define an efficient algorithm for matching in logmap images and solve the template matching problem for space variant images. Because of the very small number of pixels typical of logarithmically structured space variant arrays, the connectivity graph approach to image processing is suitable for real-time implementation, and provides a generic solution to a wide range of image processing applications with space variant sensors.
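As a sketch of reducing connected components to its graph-theoretic counterpart, the following runs a BFS flood fill over an explicit neighbor graph, so the same code works for any sensor topology; the 5-pixel sensor below is a made-up example:

```python
from collections import deque

def connected_components(neighbors, values, foreground=1):
    """Connected components over an arbitrary sensor topology.
    `neighbors[p]` lists the pixels adjacent to pixel p; the sensor
    geometry (logmap branch cut, irregular tessellation, etc.) is
    encoded entirely in this connectivity graph."""
    label, next_label = {}, 0
    for start in neighbors:
        if values.get(start) != foreground or start in label:
            continue
        # BFS flood fill over the graph.
        queue = deque([start])
        label[start] = next_label
        while queue:
            p = queue.popleft()
            for q in neighbors[p]:
                if values.get(q) == foreground and q not in label:
                    label[q] = next_label
                    queue.append(q)
        next_label += 1
    return label, next_label

# Toy 5-pixel sensor; the 0-4 edge "wraps around" a branch cut,
# something a rectangular-grid algorithm would miss.
neighbors = {0: [1, 4], 1: [0, 2], 2: [1], 3: [4], 4: [3, 0]}
values = {0: 1, 1: 1, 2: 0, 3: 1, 4: 1}
labels, n = connected_components(neighbors, values)
print(n)  # 1: pixels 0, 1, 3, 4 are joined through the wrap edge
```

Because adjacency is data rather than an implicit grid, pixels on opposite sides of the logmap branch cut are merged exactly like any other neighbors.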
Title: Miniature Direct-Drive Rotary Actuators
Author(s): Wallace, R.
This paper reports the development of direct drive DC motor actuators for miniature robots. The motors are based on Nd-Fe-B rare earth permanent magnets and are controlled by low-cost microcontrollers. The motors have low friction, small size, high speed, low construction cost, no gear backlash, and limited self-braking; they operate safely without limit switches and generate moderate torque. Significantly, one motor can generate enough torque to lift a second motor of about the same size against the force of gravity, at a distance approximately equal to the size of the motor, without resorting to the use of a counterweight. We demonstrated the feasibility of using these actuators to make a two-link robot finger or leg.