Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
TR1995-681
1995
Complementarity and Nondegeneracy in Semidefinite Programming
Alizadeh, F.;
Haeberly, J.; Overton, M.
Abstract
|
PDF
Title: Complementarity and Nondegeneracy in Semidefinite Programming
Author(s): Alizadeh, F.; Haeberly, J.; Overton, M.
Abstract:
Primal and dual nondegeneracy conditions are defined for semidefinite programming. Given the existence of primal and dual solutions, it is shown that primal nondegeneracy implies a unique dual solution and that dual nondegeneracy implies a unique primal solution. The converses hold if strict complementarity is assumed. Primal and dual nondegeneracy assumptions do not imply strict complementarity, as they do in LP. The primal and dual nondegeneracy assumptions imply a range of possible ranks for primal and dual solutions $X$ and $Z$. This is in contrast with LP where nondegeneracy assumptions exactly determine the number of variables which are zero. It is shown that primal and dual nondegeneracy and strict complementarity all hold generically. Numerical experiments suggest probability distributions for the ranks of $X$ and $Z$ which are consistent with the nondegeneracy conditions.
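For orientation, a minimal sketch of the standard primal-dual pair these conditions refer to (the notation $C$, $A_i$, $b$, $y$ is assumed here, not taken from the abstract):
$$ \min_X \; C \bullet X \quad \mbox{s.t.}\;\; A_i \bullet X = b_i,\ i=1,\dots,m,\ X \succeq 0; \qquad \max_{y,\,Z} \; b^T y \quad \mbox{s.t.}\;\; \textstyle\sum_i y_i A_i + Z = C,\ Z \succeq 0. $$
Any pair of optimal solutions with zero duality gap satisfies $XZ = 0$, so $\mathrm{rank}(X) + \mathrm{rank}(Z) \le n$; strict complementarity is the requirement $\mathrm{rank}(X) + \mathrm{rank}(Z) = n$, the semidefinite analogue of an LP solution in which no primal-dual pair of coordinates vanishes simultaneously.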
-
TR1995-682
1995
Computing Limit Loads by Minimizing a Sum of Norms
Andersen, K.;
Christiansen, E.; Overton, M.
Abstract
|
PDF
Title: Computing Limit Loads by Minimizing a Sum of Norms
Author(s): Andersen, K.; Christiansen, E.; Overton, M.
Abstract:
We consider the problem of computing the collapse state in limit analysis for a solid with a quadratic yield condition, such as, for example, the Mises condition. After discretization with the finite element method, using divergence-free elements for the plastic flow, the kinematic formulation turns into the problem of minimizing a sum of Euclidean vector norms, subject to a single linear constraint. This is a nonsmooth minimization problem, since many of the norms in the sum may vanish at the optimal point. However, efficient solution algorithms for this particular convex optimization problem have recently been developed.
The method is applied to test problems in limit analysis in two different plane models: plane strain and plates. In the first case more than 80 percent of the terms in the sum are zero in the optimal solution, causing severe ill-conditioning. In the last case all terms are nonzero. In both cases the algorithm works very well, and we solve problems which are larger by at least an order of magnitude than previously reported. The relative accuracy for the discrete problems, measured by duality gap and feasibility, is typically of the order 1.0E-8. The discretization error, due to the finite grid, depends on the nature of the solution. In the applications reported here it ranges from 1.0E-5 to 1.0E-2.
Keywords: Limit analysis, plasticity, finite element method, nonsmooth optimization.
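To fix ideas, the discretized kinematic problem has the rough form (a sketch; the symbols $B_j$, $c$, $u$ are assumed here, not taken from the report):
$$ \min_{u} \; \sum_{j=1}^{m} \| B_j u \|_2 \qquad \mbox{subject to} \qquad c^T u = 1, $$
where $u$ collects the discrete flow variables. The nonsmoothness mentioned above comes from the terms $B_j u$ that vanish at the minimizer: the Euclidean norm is not differentiable at zero, and in the plane strain experiments more than 80 percent of the terms do vanish.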
-
TR1995-707
1995
The Supervisor Synthesis Problem for Unrestricted CTL is NP-complete
Antoniotti, M.;
Mishra, B.
Abstract
|
PDF
Title: The Supervisor Synthesis Problem for Unrestricted CTL is NP-complete
Author(s): Antoniotti, M.; Mishra, B.
Abstract:
The problem of restricting a finite state model (a Kripke structure) in order to satisfy a set of unrestricted CTL formulae is named the ``Unrestricted CTL Supervisor Synthesis Problem''. The finite state model has the characteristics described in \cite{ramadge-wonham87}, that is, its transitions are partitioned between "controllable" and "uncontrollable" ones. The set of CTL formulae represents a specification of the "desired behavior" of the system, which may be achieved through a "control action". This note shows the problem to be NP-complete.
-
Ph.D. Thesis
1995
Synthesis and Verification of Controllers for Robotics and Manufacturing Devices with Temporal Logic and the "Control-D" System
Antoniotti, Marco
Abstract
|
PDF
Title: Synthesis and Verification of Controllers for Robotics and Manufacturing Devices with Temporal Logic and the "Control-D" System
Candidate: Antoniotti, Marco
Advisor(s): Mishra, Bud
Abstract:
This dissertation studies the semi-automated synthesis and verification of control systems for robotics and manufacturing devices using formal methods in a discrete framework, and bears some resemblance to the theory of controlled discrete event systems (CDES) of Ramadge and Wonham. The discrete controller components of a walking machine and of a manufacturing line in the Combat Ration Advanced Manufacturing Technology Demonstration (CRAMTD) of Rutgers University are constructed automatically using the algorithms developed here.
The goal of this research has been to facilitate the integration of CDES theory with the specification and verification formalisms for finite state systems. Many of our techniques rely on the application of some flavor of temporal logic. In particular, the model-checking techniques of Clarke and Emerson, for branching time temporal logic, proved to be valuable in the implementation of a controller synthesis tool for CDES, called Control-D. The main synthesis algorithm used by the Control-D tool compares favorably with the Ramadge-Wonham algorithm in time and space complexity, while achieving improved expressiveness in its underlying specification language.
-
TR1995-712
1995
A Hierarchical Preconditioner for the Mortar Finite Element Method
Casarin, M. A.;
Widlund, O. B.
Abstract
|
PDF
Title: A Hierarchical Preconditioner for the Mortar Finite Element Method
Author(s): Casarin, M. A.; Widlund, O. B.
Abstract:
Mortar elements form a family of nonconforming finite element methods that are more flexible than conforming finite elements and are known to be as accurate as their conforming counterparts. A fast iterative method is developed for linear, second order elliptic equations in the plane. Our algorithm is modeled on a hierarchical basis preconditioner previously analyzed and tested, for the conforming case, by Barry Smith and the second author. A complete analysis and results of numerical experiments are given for lower order mortar elements and geometrically conforming decompositions of the region into subregions.
-
TR1995-704
1995
Diagonal Edge Preconditioners in p-Version and Spectral Element Methods
Casarin, M. A.
Abstract
|
PDF
Title: Diagonal Edge Preconditioners in p-Version and Spectral Element Methods
Author(s): Casarin, M. A.
Abstract:
-
TR1995-705
1995
Quasi-Optimal Schwarz Methods for the Conforming Spectral Element Discretization
Casarin, M. A.
Abstract
|
PDF
Title: Quasi-Optimal Schwarz Methods for the Conforming Spectral Element Discretization
Author(s): Casarin, M. A.
Abstract:
The spectral element method is used to discretize self-adjoint elliptic equations in three dimensional domains. The domain is decomposed into hexahedral elements, and in each of the elements the discretization space is the set of polynomials of degree $N$ in each variable. A conforming Galerkin formulation is used, the corresponding integrals are computed approximately with Gauss-Lobatto-Legendre (GLL) quadrature rules of order $N$, and a Lagrange interpolation basis associated with the GLL nodes is used. Fast methods are developed for solving the resulting linear system by the preconditioned conjugate gradient method. The conforming {\it finite element} space on the GLL mesh, consisting of piecewise $Q_{1}$ or $P_1$ functions, produces a stiffness matrix $K_h$ that is known to be spectrally equivalent to the spectral element stiffness matrix $K_N$. $K_h$ is replaced by a preconditioner $\tilde{K}_h$ which is well adapted to parallel computer architectures. The preconditioned operator is then $\tilde{K}_h^{-1} K_N$.
Our techniques for non-regular meshes make it possible to estimate the condition number of $\tilde{K}_h^{-1} K_N$, where $\tilde{K}_h$ is a standard finite element preconditioner of $K_h$ based on the GLL mesh. Two finite element based preconditioners, the wirebasket method of Smith and the overlapping Schwarz algorithm for the spectral element method, are analyzed as examples of the use of these tools. Numerical experiments performed by Pahl are briefly discussed to illustrate the efficiency of these methods in two dimensions.
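In schematic terms, the argument rests on spectral equivalence (the constants here are assumed, not quoted from the report): there are $c, C > 0$, independent of $N$ and of the number of elements, with
$$ c\, x^T K_h x \;\le\; x^T K_N x \;\le\; C\, x^T K_h x \quad \mbox{for all } x, $$
so a bound on $\kappa(\tilde{K}_h^{-1} K_h)$ for the finite element preconditioner immediately yields a bound on $\kappa(\tilde{K}_h^{-1} K_N)$ for the spectral element matrix, the two differing only by the factor $C/c$.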
-
Ph.D. Thesis
1995
Planning in an Imperfect World Using Previous Experiences
Chiu, Jen-Lung
Abstract
|
PDF
Title: Planning in an Imperfect World Using Previous Experiences
Candidate: Chiu, Jen-Lung
Advisor(s): Davis, Ernest
Abstract:
This thesis studies the problem of planning and problem solving in an unpredictable environment by adapting previous experiences. We construct a single agent planning system CADDY and operate it in a simple golf world testbed. The study of CADDY combines the studies of probabilistic, spatial, and temporal reasoning, adapting and reusing plans, and the tradeoff between gains and costs based on various considerations.
The CADDY planning system operates in an uncertain and unpredictable environment. Despite limited perception, incomplete knowledge, and imperfect motion control, CADDY achieves its goal efficiently by finding a plan that is already known to work well in a similar situation and applying repair heuristics to improve it. The capability of adapting experiences makes CADDY a planning system with learning capability.
In this thesis, we discuss the structure of the CADDY planning system and the results of experimental tests of CADDY when applied to a simulated golf world. We compare CADDY with several other research projects on probabilistic planners and planners that utilize experiences. We also discuss how CADDY can be characterized in terms of theoretical work on plan feasibility. Finally, we point out possible directions for extending the system and for generalizing the ideas learned from CADDY to other problem domains. Currently CADDY is not directly applied to real-world problems, but it shows an interesting and promising direction of study. By combining the techniques of probabilistic reasoning, planning, and learning, the performance of planning in real-world domains can be improved dramatically.
-
Ph.D. Thesis
1995
Geodesic Problems in High Dimensions
Choi, Joonsoo
Abstract
|
PDF
Title: Geodesic Problems in High Dimensions
Candidate: Choi, Joonsoo
Advisor(s): Yap, Chee
Abstract:
The geometric shortest path (geodesic) problem can be formulated as follows: given a collection of obstacles in $\R^d$ and source and target points $s, t \in \R^d$, find a shortest obstacle-avoiding path between $s$ and $t$. This thesis studies the Euclidean geodesic problem in $\R^3$ with polyhedral obstacles and the rectilinear geodesic problem in $\R^d$ with pairwise-disjoint, axes-parallel boxes.
Computing Euclidean geodesics in $\R^3$ with polyhedral obstacles is known to be NP-hard. In contrast, Papadimitriou gave a polynomial-time approximation algorithm for this problem. Unfortunately, his complexity analysis involves an unusual mixture of both the algebraic computing model and the bit computing model. In the first part of the thesis, we present a true bit complexity analysis: there is an approximation algorithm that computes a geodesic with relative error $\epsilon > 0$ in $O((n^3 M \log M + (nM)^2) \cdot \mu(W))$ time, where $M = O(nL/\epsilon)$, $W = O(\log(n/\epsilon) + L)$, and $\mu(W)$ is the time complexity of multiplying two $W$-bit integers. Our algorithm is a variant of Papadimitriou's algorithm.
The second part of the thesis addresses the rectilinear geodesic problem in $\R^3$ with a set of pairwise-disjoint, axes-parallel boxes. A monotonicity property of rectilinear geodesics is shown: every obstacle-avoiding geodesic between two points is monotone along at least one of the coordinate directions. Using this monotonicity property, an algorithm computing a geodesic from a query point to a fixed point is presented. The preprocessing time of the algorithm is $O(n^2 \log n)$ and each query takes $O(\log n + k)$ time, where $k$ is the number of edges in a geodesic.
The last part of the thesis generalizes the above monotonicity property to every dimension: given a set of pairwise-disjoint, axes-parallel boxes in $\R^d$, every obstacle-avoiding geodesic between two points is monotone along at least one of the coordinate directions.
-
TR1995-701
1995
On the Dynamic Finger Conjecture for Splay Trees Part II: The Proof
Cole, R.
Abstract
|
PDF
Title: On the Dynamic Finger Conjecture for Splay Trees Part II: The Proof
Author(s): Cole, R.
Abstract:
The following result is shown: On an n-node splay tree, the amortized cost of an access at distance d from the preceding access is O(log (d+1)). In addition, there is an O(n) initialization cost. The accesses include searches, insertions and deletions.
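Summed over a sequence of accesses, this amortized bound takes the usual Dynamic Finger form (a hedged restatement; the indexing is assumed here): for accesses to items $x_1, \dots, x_m$ in an $n$-node splay tree, the total cost is
$$ O\Bigl( n + m + \sum_{i=2}^{m} \log\bigl( d(x_{i-1}, x_i) + 1 \bigr) \Bigr), $$
where $d(x_{i-1}, x_i)$ is the number of keys lying between consecutive accesses in symmetric order.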
-
TR1995-700
1995
On the Dynamic Finger Conjecture for Splay Trees Part I: Splay Sorting log n-Block Sequences
Cole, R.;
Mishra, B.; Schmidt, J.; Siegel, A.
Abstract
|
PDF
Title: On the Dynamic Finger Conjecture for Splay Trees Part I: Splay Sorting log n-Block Sequences
Author(s): Cole, R.; Mishra, B.; Schmidt, J.; Siegel, A.
Abstract:
A special case of the Dynamic Finger Conjecture is proved; this special case introduces a number of useful techniques.
-
TR1995-711
1995
The Average Case Complexity of Multilevel Syllogistic
Cox, J.;
Ericson, L.; Mishra, B.
Abstract
|
PDF
Title: The Average Case Complexity of Multilevel Syllogistic
Author(s): Cox, J.; Ericson, L.; Mishra, B.
Abstract:
An approach to the problem of developing provably correct programs has been to enrich a theorem prover for Hoare logic with decision procedures for a number of decidable sublanguages of set theory (EMLS, MLS, and extensions) and arithmetic (FPILP) (See [Schwartz, 1977]). Citing results of Goldberg (refer to [Goldberg, 79]) on average case behavior of algorithms for SAT, it was hoped that these decision procedures would perform well on average.
So far, it has been fairly difficult to prove average case NP-hardness under the various definitions (see [Levin, 86], [Ben-David et al, 89], [Blass & Gurevich, 91], [Gurevich, 91], [Venkatesan & Rajagopalan, 92], [Schuler & Yamakami, 92] and [Reischuk & Schindelhauer, 93]). We should note that the definitions in the literature haven't yet been standardized. We survey some of the results of the average case analysis of NP-complete problems, and compare the results of Goldberg with more pessimistic results. We prove that FPILP, EMLS and related fragments of set theory are NP-average complete, and show that there are simple distributions that will frustrate any algorithm for these decision problems.
-
TR1995-714
1995
A Highly Expressive Language of Spatial Constraints
Davis, E.
Abstract
|
PDF
Title: A Highly Expressive Language of Spatial Constraints
Author(s): Davis, E.
Abstract:
AI applications require the representation and manipulation of partial spatial knowledge of many different kinds. This paper argues that a representation rich in primitives but fairly restricted in logical form will suffice for many of these purposes. We present and discuss one such representation language. We demonstrate that the language is expressive enough to capture exactly or closely approximate many of the representations that have been used in the AI literature. It also contains some original constructs for dealing with collections of regions of unknown cardinality.
-
TR1995-706
1995
Approximation and Abstraction in Solid Object Kinematics
Davis, E.
Abstract
|
PDF
Title: Approximation and Abstraction in Solid Object Kinematics
Author(s): Davis, E.
Abstract:
Physical reasoning often involves approximating or abstracting the situation or the theory at hand. This paper studies the nature of approximation and abstraction as applied to the kinematic theory of rigid solid objects.
Five categories of approximation are considered:
1. Geometric approximation.
2. Abstraction of a complex kinematic structure by a simpler kinematic structure. For example, the abstraction of a collection of tightly-linked objects as a single object.
3. Abstraction of a kinematic structure by a simpler theory. For example, the abstraction by a connectivity graph in configuration space.
4. Approximation of a complex kinematic structure by a simpler structure in a more complex theory. For example, the approximation of a chain by a string.
5. Approximation of a more complex theory by a kinematic theory. For example, the approximation of solid object dynamics by kinematics.
We discuss how some of these types of approximation can be implemented and combined. We conclude that abstraction and approximation are open-ended styles of reasoning, rather than neatly categorizable meta-relationships.
-
TR1995-703
1995
Approximations of Shape and Configuration Space
Davis, E.
Abstract
|
PDF
Title: Approximations of Shape and Configuration Space
Author(s): Davis, E.
Abstract:
We consider the issue of shape approximation in kinematic mechanical systems; that is, systems of rigid solid objects whose behavior can be characterized entirely in terms of the constraints that each object moves rigidly and that no two objects overlap, without considering masses or forces. The general question we address is the following: Suppose we have calculated the behavior of some kinematic system using ideal descriptions of the shapes of the objects involved. Does it then follow that a real mechanism, in which the shape of the objects approximates this ideal, will have a similar behavior? In addressing this question, we present various possible definitions of what it means (a) for one shape to approximate another and (b) for the behavior of one mechanism to be similar to the behavior of another. We characterize the behavioral properties of a kinematic system in terms of its configuration space; that is, the set of physically feasible positions and orientations of the objects. We prove several existential theorems that guarantee that a sufficiently precise approximation of shape preserves significant properties of configuration space. In particular, we show that it is often possible to guarantee that the configuration space of system A is close to that of system B in terms of metric criteria by requiring that the shapes of A closely approximate those of B in terms of the dual-Hausdorff distance. It is often possible to guarantee further that the configuration space of A is topologically similar to that of B by requiring that the surface normals are close at corresponding boundary points of A and B.
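For reference, the ordinary Hausdorff distance underlying such metric criteria is
$$ d_H(A, B) \;=\; \max\Bigl\{ \sup_{a \in A} \inf_{b \in B} \|a - b\|, \;\; \sup_{b \in B} \inf_{a \in A} \|a - b\| \Bigr\}; $$
the dual-Hausdorff distance referred to above is a variant defined in the report itself and is not reproduced here.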
-
Ph.D. Thesis
1995
Practical Structures for Parallel Operating Systems
Edler, Jan
Abstract
|
PDF
Title: Practical Structures for Parallel Operating Systems
Candidate: Edler, Jan
Advisor(s): Gottlieb, Allan
Abstract:
Large shared memory MIMD computers, with hundreds or thousands of processors, pose special problems for general purpose operating system design. In particular:
1. Serial bottlenecks that are insignificant for smaller machines can seriously limit scalability.
2. The needs of parallel programming environments vary greatly, requiring a flexible model for runtime support.
3. Frequent synchronization within parallel applications can lead to high overhead and bad scheduling decisions.
Because of these difficulties, the algorithms, data structures, and abstractions of conventional operating systems are not well suited to highly parallel machines.
We describe the Symunix operating system for the NYU Ultracomputer, a machine with hardware support for Fetch&Phi operations and combining of memory references. Avoidance of serial bottlenecks, through careful design of interfaces and use of highly parallel algorithms and data structures, is the primary goal of the system. Support for flexible parallel programming models relies on user-mode code to implement common abstractions such as threads and shared address spaces. Close interaction between the kernel and user-mode runtime layer reduces the cost of synchronization and avoids many scheduling anomalies.
Symunix first became operational in 1985 and has been extensively used for research and instruction on two generations of Ultracomputer prototypes at NYU. We present data gathered from actual multiprocessor execution to support our claim of practicality for future large systems.
-
Ph.D. Thesis
1995
Dreme: for Life in the Net
Fuchs, Matthew
Abstract
|
PDF
Title: Dreme: for Life in the Net
Candidate: Fuchs, Matthew
Advisor(s): Perlin, Ken
Abstract:
This dissertation makes four contributions towards supporting distributed, multi-user applications over open networks.
Dreme, a distributed dialect of the Scheme language in which all first-class language objects are mobile in the network. In particular, various distributed topologies, such as client/server and peer-to-peer, can be created by migrating closures with overlapping scopes around the network, correct inter-process communication being assured by Scheme's lexical scoping rules and network wide addressing. Threads of control are passed around through first-class distributed continuations.
A User Interface toolkit for coordinating events in multi-threaded, multi-user applications by organizing continuation callbacks into nested lexical scopes. Each event has certain attributes, such as synchronous/asynchronous. Certain events create new scopes with new events. Continuation callbacks allow both synchronous events, which return values to their callers, and asynchronous ones. Event-handling code needn't be spread throughout the application, as it is in applications built around an event-loop.
A distributed garbage collection algorithm that collects all cycles on an open network. The basic algorithm depends on maintaining the inverse reference graph (IRG) among network nodes (i.e., if a->b is in the regular graph, b->a is in the IRG). A single IRG traversal from any object determines the status of each object touched. Communication is decentralized (any object can choose to determine its status), garbage is touched O(1) times (in the absence of failures), it is fault-tolerant, and can handle malicious or faulty neighbors. Each operation uses messages linear in the size of the IRG. Overlapping operations perform like parallel quick sort.
An approach to using the Standard Generalized Markup Language (SGML) over the network to support distributed GUIs, intelligent clients, and mobile agents. SGML is a meta-grammar for creating domain specific document markup languages to which a variety of semantics (display, reading/writing databases, etc.) can be applied. The document, its grammar, and some semantics, are retrieved over the network. Applications normally create interfaces directly out of graphic objects to communicate with the user. However, if the interface has some semantics (and is parsable), a computational agent can interpret the interface and talk directly to the application on behalf of the human.
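Returning to the distributed collection algorithm above: the following is a minimal, centralized sketch of the inverse-reference-graph idea (all names are illustrative; the dissertation's algorithm is distributed, decentralized, and fault-tolerant). Traversing the IRG from an object visits exactly the objects that can reach it, so if no root is encountered, everything visited is garbage, cycles included.
    # irg[x]  = set of objects holding a reference to x (x's predecessors).
    # roots   = objects that are always live (stacks, registers, globals).
    def classify(obj, irg, roots):
        seen, stack = set(), [obj]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            if x in roots:
                # Some root reaches obj, so obj is live; the other visited
                # objects are not individually classified by this sketch.
                return "live", set()
            stack.extend(irg.get(x, ()))
        # No root reaches obj; every visited object leads only into this dead
        # region, so the whole set `seen` is garbage, including cycles.
        return "garbage", seen
    # Example: a two-object cycle a <-> b that no root references.
    irg = {"a": {"b"}, "b": {"a"}}
    print(classify("a", irg, roots={"root"}))   # ('garbage', {'a', 'b'})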
-
Ph.D. Thesis
1995
Fault-tolerant Parallel Processing Combining Linda, Checkpointing, and Transactions
Jeong, Karpjoo
Abstract
|
PDF
Title: Fault-tolerant Parallel Processing Combining Linda, Checkpointing, and Transactions
Candidate: Jeong, Karpjoo
Advisor(s): Shasha, Dennis
Abstract:
With the advent of high performance workstations and fast LANs, networks of workstations have recently emerged as a promising computing platform for long-running coarse grain parallel applications. Their advantages are wide availability and cost-effectiveness, as compared to massively parallel computers. Long-running computation in the workstation environment, however, requires both fault tolerance and the effective utilization of idle workstations.
In this dissertation, we present a variant of Linda, called Persistent Linda (PLinda), that treats these two issues uniformly: specifically, PLinda treats non-idleness as failure.
PLinda provides a combination of checkpointing and transaction support on both data and program state (an encoding of continuations). The traditional transaction model is simplified and then extended to support robust parallel computation. Treatable failures include hard failures and slowdown failures of processors and main memory, as well as network omission and corruption failures.
The programmer can customize fault tolerance when constructing an application, trading failure-free performance against recovery time. When creating a PLinda program, the programmer can decide on the frequency of transactions and the encoding of continuations to be saved upon transaction commit. At runtime, the programmer can decide to suppress certain continuations for better failure-free performance.
PLinda has been applied to corporate bond index statistics computation and biological pattern recognition.
-
TR1995-692
1995
A Knowledge Representation Based on the Belnap's Four-Valued Logic
Kaluzhny, Y.;
Muravitsky, A.
Abstract
|
PDF
Title: A Knowledge Representation Based on the Belnap's Four-Valued Logic
Author(s): Kaluzhny, Y.; Muravitsky, A.
Abstract:
We treat knowledge from a computer-oriented point of view, considering it as a kind of data type. An axiomatic approach to the notion of data type undertaken by Dana Scott in [D.S.Scott, Outline of a Mathematical Theory of Computation, in: Proceedings of Princeton Conference on Information Science, 1971, pp. 169--176] is explored to find entities suitable for representation techniques. At the same time, we stay in Belnap's paradigm of the toleration of inconsistency. We propose a representation of knowledge (possibly with contradictions) in a simple propositional language, and we show how such knowledge can be maintained and how it should be transformed on receipt of new information. In this transformation, the key role is played by Scott's continuity rather than consistency.
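For orientation, Belnap's four values ordered by information content, which is the approximation order with respect to which Scott continuity is taken here (a standard picture, with $\mathbf{N}$ = told nothing, $\mathbf{T}$ = told true, $\mathbf{F}$ = told false, $\mathbf{B}$ = told both):
$$ \mathbf{N} \;\sqsubseteq\; \mathbf{T} \;\sqsubseteq\; \mathbf{B}, \qquad \mathbf{N} \;\sqsubseteq\; \mathbf{F} \;\sqsubseteq\; \mathbf{B} . $$
New information only moves a proposition upward in this order, so the contradictory value $\mathbf{B}$ is tolerated rather than forcing the knowledge state to collapse.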
-
TR1995-695
1995
Dirichlet Problem for the Schrodinger Operator in a Half-space with Boundary Data of Arbitrary Growth at Infinity
Kheyfits, A.
Abstract
|
PDF
Title: Dirichlet Problem for the Schrodinger Operator in a Half-space with Boundary Data of Arbitrary Growth at Infinity
Author(s): Kheyfits, A.
Abstract:
We consider the Dirichlet problem for the Schrodinger operator in a half-space with boundary data having an arbitrary growth at infinity. A solution is constructed as the generalized Poisson integral. Uniqueness of the solution is investigated too.
-
TR1995-683
1995
An Optimal Preconditioner for a Class of Saddle Point Problems with a Penalty Term, Part II: General Theory
Klawonn, A.
Abstract
|
PDF
Title: An Optimal Preconditioner for a Class of Saddle Point Problems with a Penalty Term, Part II: General Theory
Author(s): Klawonn, A.
Abstract:
Iterative methods are considered for saddle point problems with penalty term. A positive definite preconditioner is constructed and it is proved that the condition number of the preconditioned system can be made independent of the discretization and the penalty parameters. Examples include the pure displacement problem in linear elasticity, the Timoshenko beam, and the Mindlin-Reissner plate.
Key words: Saddle point problems, penalty term, nearly incompressible materials, Timoshenko, Mindlin-Reissner, preconditioned conjugate residual method, multilevel, domain decomposition.
Please note: This report is a revised version of tr676.
-
TR1995-699
1995
Run-time versus Compile-time Instruction Scheduling in Superscalar (RISC) Processors: Performance and Tradeoffs
Leung, A.;
Palem, K.; Ungureanu, C.
Abstract
|
PDF
Title: Run-time versus Compile-time Instruction Scheduling in Superscalar (RISC) Processors: Performance and Tradeoffs
Author(s): Leung, A.; Palem, K.; Ungureanu, C.
Abstract:
The RISC revolution has spurred the development of processors with increasing levels of instruction level parallelism (ILP). In order to realize the full potential of these processors, multiple instructions must be issued and executed in a single cycle. Consequently, instruction scheduling plays a crucial role as an optimization in this context. While early attempts at instruction scheduling were limited to compile-time approaches, the recent trend is to provide dynamic support in hardware. In this paper, we present the results of a detailed comparative study of the performance advantages to be derived from the spectrum of instruction scheduling approaches: from limited basic-block schedulers in the compiler, to novel and aggressive run-time schedulers in hardware. A significant portion of our experimental study, via simulations, is devoted to understanding the performance advantages of run-time scheduling. Our results indicate that run-time scheduling is effective in extracting the ILP inherent in the program trace being scheduled, over a wide range of machine and program parameters. Furthermore, we show that this effectiveness can be further enhanced by a simple basic-block scheduler in the compiler which optimizes for the presence of the run-time scheduler in the target; current basic-block schedulers are not designed to take advantage of this feature. We demonstrate this fact by presenting a novel enhanced basic-block scheduler in this paper. Finally, we outline a simple analytical characterization of the performance advantage that run-time schedulers have to offer.
Key words: Compile-time Optimizations, Dynamic Schedulers, Instruction Scheduling, Program Traces, Scope, Superscalar Processors
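To fix ideas about the compile-time end of this spectrum, here is a minimal sketch of a plain greedy basic-block list scheduler (the issue width, latencies, and critical-path priority are illustrative assumptions; this is not the enhanced scheduler proposed in the paper):
    import heapq
    def list_schedule(latency, deps, issue_width=2):
        # latency[i] = cycles instruction i takes; deps[i] = instructions that
        # must complete before i may issue (indices smaller than i).
        n = len(latency)
        succs = {i: [] for i in range(n)}
        indeg = {i: len(deps[i]) for i in range(n)}
        for i in range(n):
            for d in deps[i]:
                succs[d].append(i)
        # Priority = longest latency path from the instruction to the block end.
        prio = list(latency)
        for i in reversed(range(n)):
            for s in succs[i]:
                prio[i] = max(prio[i], latency[i] + prio[s])
        ready = [(-prio[i], i) for i in range(n) if indeg[i] == 0]
        heapq.heapify(ready)
        schedule, finish, pending, cycle = {}, {}, [], 0
        while ready or pending:
            done = [p for p in pending if finish[p] <= cycle]
            pending = [p for p in pending if finish[p] > cycle]
            for p in done:                      # results now available
                for s in succs[p]:
                    indeg[s] -= 1
                    if indeg[s] == 0:
                        heapq.heappush(ready, (-prio[s], s))
            issued = 0
            while ready and issued < issue_width:
                _, i = heapq.heappop(ready)
                schedule[i] = cycle
                finish[i] = cycle + latency[i]
                pending.append(i)
                issued += 1
            cycle += 1
        return schedule
    # Example: two independent loads (latency 2) feeding an add.
    # list_schedule(latency=[2, 2, 1], deps=[[], [], [0, 1]]) -> {0: 0, 1: 0, 2: 2}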
-
Ph.D. Thesis
1995
A Model-Based 3-D Object Recognition System Using Geometric Hashing with Attributed Features
Liu, Jyhjong
Abstract
|
PDF
Title: A Model-Based 3-D Object Recognition System Using Geometric Hashing with Attributed Features
Candidate: Liu, Jyhjong
Advisor(s): Hummel, Robert
Abstract:
We build an object recognition system that is able to recognize 3-D objects such as vehicles embedded in highly complicated backgrounds. We use the geometric hashing method, augmenting the approach through the use of attributed features, k-d trees for access to features, and the use of bounds in order to limit the search.
We make use of expressive features to improve the performance of a geometric hashing object recognition system. Various kinds of attributed features, such as the midpoint of a line segment with its orientation, the endpoints of a line segment with its orientation, and the center and the circle features are extracted and used in our system.
The number of features as well as the type of features in each model can vary. We make use of weighted voting, which has a Bayesian interpretation. The distribution of the invariants for various features as well as the bounds of the weighted voting formula are analyzed. In order to improve the performance of the system, we use a k-d tree to search entries in high-dimensional hash tables. The method is generalized in order to treat variables taking on values from a non-interval domain, such as data measuring angles. To make use of available computer resources, we distribute the computation, assigning evidence accumulation for a single hypothesis to one processor in a multiple-processor, multiple-workstation environment. The implementation reduces the communication overhead to a minimum. The system is implemented using the Khoros software development system.
The results of target recognition are reported in numerous experiments. The experiments show that the use of more expressive features improves the performance of the recognition system.
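To make the basic voting scheme concrete, here is a minimal sketch of classical two-dimensional geometric hashing with plain point features (the quantization step and all names are illustrative; the attributed features, weighted voting, and k-d tree lookups of the thesis are not modeled):
    from collections import defaultdict
    def invariant(p1, p2, q):
        # Coordinates of q in the frame of the basis pair (p1, p2);
        # invariant under translation, rotation, and uniform scaling.
        ex = (p2[0] - p1[0], p2[1] - p1[1])
        ey = (-ex[1], ex[0])                       # ex rotated 90 degrees
        d = ex[0] ** 2 + ex[1] ** 2
        w = (q[0] - p1[0], q[1] - p1[1])
        return ((w[0] * ex[0] + w[1] * ex[1]) / d,
                (w[0] * ey[0] + w[1] * ey[1]) / d)
    def quantize(c, step=0.05):
        return (round(c[0] / step), round(c[1] / step))
    def build_table(models):
        # models: dict model_id -> list of 2-D feature points.
        table = defaultdict(list)
        for mid, pts in models.items():
            for i, p1 in enumerate(pts):
                for j, p2 in enumerate(pts):
                    if i == j:
                        continue
                    for k, q in enumerate(pts):
                        if k not in (i, j):
                            table[quantize(invariant(p1, p2, q))].append((mid, (i, j)))
        return table
    def recognize(table, scene, basis):
        # Vote for (model, basis) hypotheses using one scene basis pair.
        votes = defaultdict(int)
        p1, p2 = scene[basis[0]], scene[basis[1]]
        for k, q in enumerate(scene):
            if k in basis:
                continue
            for entry in table.get(quantize(invariant(p1, p2, q)), ()):
                votes[entry] += 1
        return max(votes.items(), key=lambda kv: kv[1]) if votes else None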
-
TR1995-713
1995
Computational Real Algebraic Geometry
Mishra, B.
Abstract
|
PDF
Title: Computational Real Algebraic Geometry
Author(s): Mishra, B.
Abstract:
Computational real algebraic geometry studies various algorithmic questions dealing with the real solutions of a system of equalities, inequalities, and inequations of polynomials over the real numbers. This emerging field is largely motivated by the power and elegance with which it solves a broad and general class of problems arising in robotics, vision, computer aided design, geometric theorem proving, etc.
The following survey paper discusses the underlying concepts, algorithms and a series of representative applications. This paper will appear as a chapter in the "Handbook of Discrete and Computational Geometry" (Edited by J.E. Goodman and J. O'Rourke), CRC Series in Discrete and Combinatorial Mathematics.
-
TR1995-680
1995
Grasp Metrics: Optimality and Complexity
Mishra, B.
Abstract
|
PDF
Title: Grasp Metrics: Optimality and Complexity
Author(s): Mishra, B.
Abstract:
In this paper, we discuss and compare various metrics for goodness of a grasp. We study the relations and trade-offs among the goodness of a grasp, geometry of the grasped object, number of fingers and the computational complexity of the grasp-synthesis algorithms. The results here employ the techniques from convexity theory first introduced by the author and his colleagues.
-
TR1995-709
1995
Three Finger Optimal Planar Grasp
Mishra, B.;
Teichman, M.
Abstract
|
PDF
Title: Three Finger Optimal Planar Grasp
Author(s): Mishra, B.; Teichman, M.
Abstract:
In this paper, we study various algorithmic questions regarding the computation of an optimal three finger planar grasp. We present a novel O(n^2 log n)-time algorithm to compute such an optimal grasp for an arbitrary simple n-gon. This algorithm can be used for finding ``good'' immobilizing sets. We also discuss several variations on the problem and many intriguing open questions in the area that remain unsolved.
-
TR1995-710
1995
On the Lidskii-Vishik-Lyusternik Perturbation Theory for Eigenvalues of Matrices with Arbitrary Jordan Structure
Moro, J.;
Burke, J.V.; Overton, M.L.
Abstract
|
PDF
Title: On the Lidskii-Vishik-Lyusternik Perturbation Theory for Eigenvalues of Matrices with Arbitrary Jordan Structure
Author(s): Moro, J.; Burke, J.V.; Overton, M.L.
Abstract:
Let $A$ be a complex matrix with arbitrary Jordan structure, and $\lambda$ an eigenvalue of $A$ whose largest Jordan block has size $n$. We review previous results due to Lidskii, showing that the splitting of $\lambda$ under a small perturbation of $A$ of order $\epsilon$, is, generically, of order $\epsilon^{1/n}$. Explicit formulas for the leading coefficients are obtained, involving the perturbation matrix and the eigenvectors of $A$. We also present an alternative proof of Lidskii's main theorem, based on the use of the Newton diagram. This approach clarifies certain difficulties which arise in the nongeneric case, and leads, in some situations, to the extension of Lidskii's results. These results suggest a new notion of Holder condition number for multiple eigenvalues, depending only on the conditioning of the associated eigenvectors, not the conditioning of the Jordan vectors.
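Schematically, the generic splitting recalled above has the form (indices and normalizations suppressed; this is a hedged restatement, not a full statement of the theorem): if $\lambda$ has $r$ Jordan blocks of maximal size $n$ and the perturbation is $A + \epsilon B$, then for each eigenvalue $\xi_k$, $k = 1, \dots, r$, of the $r \times r$ matrix formed by compressing $B$ onto the corresponding left and right eigenvectors of $A$, there are perturbed eigenvalues
$$ \lambda_{k,j}(\epsilon) \;=\; \lambda + \xi_k^{1/n}\, \epsilon^{1/n} + o\bigl(\epsilon^{1/n}\bigr), \qquad j = 1, \dots, n, $$
where the $n$ values of $\xi_k^{1/n}$ range over all $n$-th roots, provided the $\xi_k$ are nonzero and distinct (the generic case).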
-
TR1995-698
1995
Combined Instruction Scheduling and Register Allocation
Motwani, R.;
Palem, K.; Sarkar, V.; Reyen, S.
Abstract
|
PDF
Title: Combined Instruction Scheduling and Register Allocation
Author(s): Motwani, R.; Palem, K.; Sarkar, V.; Reyen, S.
Abstract:
In this paper, we describe a novel framework, referred to as CRISP, for expressing an optimization problem that simultaneously addresses instruction scheduling and register allocation. By modeling spill costs directly in the scheduling problem, CRISP permits the design of algorithms whose objective is to exploit the available instruction level parallelism --- the traditional goal of instruction scheduling --- while lowering the cost of register spilling at the same time. Currently, these optimizations are performed in separate phases and interact in ways that are not well characterized, leading to phase-ordering problems. We also develop a fast heuristic for solving this combined optimization in the context of basic blocks; our algorithm runs in O(EN) time, where a basic block of N instructions has E edges, and this time includes all preprocessing costs. In comparison to conventional phase-ordered approaches, our combined heuristic performed well in experimental evaluations on basic blocks with sizes in the range 5 to 100. We also present rigorous characterizations of the inherent difficulty of solving optimization problems in our CRISP framework, as well as in classical frameworks. A surprising outcome of this work is that problems expressed in CRISP are provably easier to solve than, say, graph coloring --- graph coloring is the classical approach to expressing just one of the two phases of interest, namely register allocation. By eliminating the phase-ordering problem, CRISP lowers the overall complexity of the software engineering effort involved. This is because optimizers designed with our unified approach will be relatively ``lightweight'' compared to those that have to cope with phase-ordering. This has positive implications both for the duration of design cycles and for the concomitant costs of designing low-level optimizations in modern compilers.
-
TR1995-690
1995
A Framework for Knowledge-Based Systems
Muravitsky, A.
Abstract
|
PDF
Title: A Framework for Knowledge-Based Systems
Author(s): Muravitsky, A.
Abstract:
The paper continues the theme of [ knowledge.ps ]. We differentiate between our approach to knowledge representation and that of others by expressing the following Working Hypothesis: Knowledge is a data type, and knowledge revision is accomplished by continuous operations on it, which are coordinated with its effective basis. Staying within the limits of Belnap's paradigm of the admittance of contradictory information into the computer's memory, our purpose in this paper is to reduce as much as possible all the computable processes needed for modifying the current state of the computer's knowledge and to describe conditions for possible maneuvering. In particular, we solve some problems of decidability concerning operations on the minimal states, which are regarded as natural knowledge transformers. We also show how to express those operations in lattice-theoretic terms, which leads to a simplification of their computation on the lattice of minimal states. The problem of backtracking in the presented context is considered as well.
-
TR1995-693
1995
A Perspective of New Foundations for Knowledge Maintenance Systems: Research Program
Muravitsky, A.
Abstract
|
PDF
Title: A Perspective of New Foundations for Knowledge Maintenance Systems: Research Program
Author(s): Muravitsky, A.
Abstract:
We propose to provide new mathematical foundations for the design of knowledge-based systems. The underlying idea is that the knowledge which the computer (``artificial agent'') operates with is considered as a kind of abstract data type. In this context, a relation of approximation arises in a natural way when one imagines the computer as operating in a changing information environment (``information flow''). This notion of approximation can be studied using the techniques that have been developed for domain theory in the context of denotational semantics of programming languages.
-
TR1995-689
1995
Knowledge Representation as Domains
Muravitsky, A.
Abstract
|
PDF
Title: Knowledge Representation as Domains
Author(s): Muravitsky, A.
Abstract:
This is a continuing attempt in a series of papers [ knowledge.ps, inform.ps, frame.ps ] to show how computer-represented knowledge can be arranged as elements of an effectively represented semantic domain in the sense of [C.A.Gunter and D.S.Scott, Semantic Domains, in: J. van Leeuwen (ed.), Handbook of Theoretical Computer Science, Vol. B, pp. 635--674]. We present a direct deductive description of the domain, which was defined semantically in [ knowledge.ps ], via Scott's notion of an information system. Also, the internal structure of the continuous ampliative operations coordinated with the domain's effective basis is established. Though we always remain in the paradigm of the toleration of contradictory information described in [N.D.Belnap, A Useful Four-Valued Logic: How a Computer Should Think, in: A.R.Anderson, N.D.Belnap, and J.M.Dunn, Entailment: the Logic of Relevance and Necessity, Vol. 2, Princeton Univ. Press, 1992], the approach in question could be extended to include domains for consistency knowledge bases.
-
TR1995-694
1995
Logic of Information Knowledge
Muravitsky, A.
Abstract
|
PDF
Title: Logic of Information Knowledge
Author(s): Muravitsky, A.
Abstract:
We share with some philosophers the view that a state of knowledge, being a part of the real world, can bring contradiction into it. Such an ontological reading of knowledge is very important when one deals with information knowledge, which arises as the content of the computer's memory when the computer is placed into a changing information environment ("information flow") and "must" be able to tolerate any information (not excluding contradictions) from the computer's users. Continuing research begun in [KM 93], we consider at length one kind of Scott-continuous operation introduced there. Each such operation [A->B](x), where A and B are formulas in a propositional language, called a rule, moves the computer to a "minimal" state of knowledge in which B is true, if in the current state A is true. Note that the notion of rule is used here in an information-transforming sense, rather than in the ordinary truth-sound sense. We distinguish between global and local rules and show that these notions are decidable. Also, we define a modal epistemic logic as a tool for predicting the possible evolution of the system's knowledge, and we establish the decidability of this logic.
-
TR1995-688
1995
On the First Degree Entailment of Two 3-Valued Logics
Muravitsky, A.
Abstract
|
PDF
Title: On the First Degree Entailment of Two 3-Valued Logics
Author(s): Muravitsky, A.
Abstract:
We note first that the first degree entailment of Łukasiewicz's 3-valued logic and of a 3-valued logic extracted from Belnap's 4-valued logic is the same. Then, we give an axiomatization of that entailment as the calculus $E_{fde} + A \wedge \neg A \to B \vee \neg B$, where $E_{fde}$ is the first degree entailment of Anderson-Belnap's logic E of relevance and necessity.
-
TR1995-691
1995
Some Knowledge Transformers: Infons and Constraints
Muravitsky, A.
Abstract
|
PDF
Title: Some Knowledge Transformers: Infons and Constraints
Author(s): Muravitsky, A.
Abstract:
The goal of this paper is twofold. First, it is to present a general scheme within which information is supposed to turn into computer-represented knowledge and, second, to define two natural kinds of transformers of this knowledge which this scheme thrusts us into considering.
-
TR1995-697
1995
New Mathematical Foundations for Knowledge Maintenance Systems: Research Program
Muravitsky, Alexei Yu.
Abstract
|
PDF
Title: New Mathematical Foundations for Knowledge Maintenance Systems: Research Program
Author(s): Muravitsky, Alexei Yu.
Abstract:
We propose to provide new mathematical foundations for the design of knowledge-based systems. The underlying idea is that the knowledge which the computer ("artificial agent") operates with is considered as a kind of abstract data type. In this context, a relation of approximation arises in a natural way when one imagines the computer as operating in a changing information environment ("information flow"). This notion of approximation can be studied using the techniques that have been developed for domain theory in the context of denotational semantics of programming languages.
-
TR1995-696
1995
Some Knowledge Transformers: Infons and Constraints
Muravitsky, Alexei Yu.
Abstract
|
PDF
Title: Some Knowledge Transformers: Infons and Constraints
Author(s): Muravitsky, Alexei Yu.
Abstract:
The goal of this paper is twofold. First, it is to present a general scheme within which information is supposed to turn into computer-represented knowledge and, second, to define two natural kinds of transformers of this knowledge which this scheme thrusts us into considering.
-
TR1995-686
1995
Double Hashing is Computable and Randomizable with Universal Hash Functions
Schmidt, J.;
Siegel, A.
Abstract
|
PDF
Title: Double Hashing is Computable and Randomizable with Universal Hash Functions
Author(s): Schmidt, J.; Siegel, A.
Abstract:
Universal hash functions that exhibit (c log n)-wise independence are shown to give a performance in double hashing and virtually any reasonable generalization of double hashing that has an expected probe count of 1/(1-alpha) + epsilon for the insertion of the (alpha n)-th item into a table of size n, for any fixed alpha < 1 and epsilon > 0. This performance is within epsilon of optimal. These results are derived from a novel formulation that overestimates the expected probe count by underestimating the presence of partial items already inserted into the hash table, and from a sharp analysis of the underlying stochastic structures formed by colliding items.
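As a concrete point of reference, a minimal sketch of the double hashing insertion being analyzed (the two hash functions here are ordinary Python hashes standing in for the (c log n)-wise independent families of the paper; the table size and load factor are illustrative):
    def double_hash_insert(table, key):
        # Open addressing with probe sequence h1, h1+h2, h1+2*h2, ... (mod m).
        m = len(table)
        h1 = hash(("h1", key)) % m
        h2 = 1 + hash(("h2", key)) % (m - 1)    # step in [1, m-1]; m prime => full cycle
        for i in range(m):
            slot = (h1 + i * h2) % m
            if table[slot] is None:
                table[slot] = key
                return i + 1                    # probes used for this insertion
        raise RuntimeError("table full")
    # Fill a prime-sized table up to load factor alpha; the bound above concerns
    # the probe count of the insertion made when the load reaches alpha.
    m, alpha = 1009, 0.7
    table = [None] * m
    probes = [double_hash_insert(table, k) for k in range(int(alpha * m))]
    print(probes[-1], sum(probes) / len(probes))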
-
TR1995-708
1995
An API for Choreographing Data Accesses
Shriver, E.A.M.;
Wisniewski, L.F.
Abstract
|
PDF
Title: An API for Choreographing Data Accesses
Author(s): Shriver, E.A.M.; Wisniewski, L.F.
Abstract:
Current APIs for multiprocessor multi-disk file systems are not easy to use in developing out-of-core algorithms that choreograph parallel data accesses. Consequently, the efficiency of these algorithms is hard to achieve in practice. We address this deficiency by specifying an API that includes data-access primitives for data choreography. With our API, the programmer can easily access specific blocks from each disk in a single operation, thereby fully utilizing the parallelism of the underlying storage system.
Our API supports the development of libraries of commonly-used higher-level routines such as matrix-matrix addition, matrix-matrix multiplication, and BMMC (bit-matrix-multiply/complement) permutations. We illustrate our API in implementations of these three high-level routines to demonstrate how easy it is to use.
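As a purely hypothetical illustration of what a choreographed access in this spirit could look like (the function name, block size, and request format below are invented for this sketch and are not the interface defined in the report):
    import concurrent.futures
    BLOCK_SIZE = 64 * 1024
    def read_blocks(disk_files, request):
        # disk_files: one open file object per disk.
        # request:    dict disk_index -> list of block numbers wanted from that disk.
        # Returns {(disk_index, block_number): bytes}, issuing the per-disk reads
        # in parallel so that all disks are kept busy by the single call.
        def read_from_disk(d):
            out = {}
            for b in sorted(request.get(d, ())):
                disk_files[d].seek(b * BLOCK_SIZE)
                out[(d, b)] = disk_files[d].read(BLOCK_SIZE)
            return out
        result = {}
        with concurrent.futures.ThreadPoolExecutor(max_workers=len(disk_files)) as pool:
            for part in pool.map(read_from_disk, range(len(disk_files))):
                result.update(part)
        return result
    # Example: fetch block 3 from disk 0 and blocks 0 and 7 from disk 2 in one call.
    # blocks = read_blocks(disk_files, {0: [3], 2: [0, 7]})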
-
TR1995-687
1995
Closed Hashing is Computable and Optimally Randomizable with Universal Hash Functions
Siegel, A.;
Schmidt, J.
Abstract
|
PDF
Title: Closed Hashing is Computable and Optimally Randomizable with Universal Hash Functions
Author(s): Siegel, A.; Schmidt, J.
Abstract:
-
TR1995-684
1995
On Universal Classes of Extremely Random Constant Time Hash Functions and their Time-space Tradeoff
Siegel, A.
Abstract
|
PDF
Title: On Universal Classes of Extremely Random Constant Time Hash Functions and their Time-space Tradeoff
Author(s): Siegel, A.
Abstract:
A family of functions F that map [0,n]->[0,n] is said to be h-wise independent if any h points in [0,n] have an image, for randomly selected f in F, that is uniformly distributed. This paper gives both probabilistic and explicit randomized constructions of (n**epsilon)-wise independent functions, for epsilon < 1, that can be evaluated in constant time for the standard random access model of computation. Simple extensions give comparable behavior for larger domains. As a consequence, many probabilistic algorithms can for the first time be shown to achieve their expected asymptotic performance for a feasible model of computation.
This paper also establishes a tight tradeoff in the number of random seeds that must be precomputed for a random function that runs in time T and is h-wise independent.
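For contrast with the constant-time constructions above, the classical degree-(h-1) polynomial family over a prime field is h-wise independent but costs Theta(h) time per evaluation (a sketch; the prime and the interface are illustrative):
    import random
    def make_hash(h, p=2_147_483_647):          # p = 2^31 - 1, a prime
        # One member of the classical h-wise independent family
        # f(x) = a_0 + a_1 x + ... + a_{h-1} x^{h-1} mod p:
        # any h distinct points in [0, p) receive independent uniform images.
        coeffs = [random.randrange(p) for _ in range(h)]
        def f(x):
            acc = 0
            for a in reversed(coeffs):          # Horner evaluation, Theta(h) work
                acc = (acc * x + a) % p
            return acc
        return f
    f = make_hash(h=4)
    print([f(x) % 1000 for x in range(5)])      # e.g. reduce into a table of size 1000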
-
TR1995-685
1995
Toward a Usable Theory of Chernoff Bounds for Heterogeneous and Partially Dependent Random Variables
Siegel, A.
Abstract
|
PDF
Title: Toward a Usable Theory of Chernoff Bounds for Heterogeneous and Partially Dependent Random Variables
Author(s): Siegel, A.
Abstract:
Let $X$ be a sum of real valued random variables and have a bounded mean $E[X]$. The generic Chernoff-Hoeffding estimate for large deviations of $X$ is $P\{X - E[X] \ge a\} \le \min_{y \ge 0} e^{-y(a + E[X])}\, E[e^{yX}]$, which applies with $a \ge 0$ to random variables with very small tails. At issue is how to use this method to attain sharp and useful estimates. We present a number of Chernoff-Hoeffding bounds for sums of random variables that may have a variety of dependence relationships and that may be heterogeneously distributed.
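As a one-line reminder of where this estimate comes from (a standard step, not specific to this paper): for any $y > 0$, Markov's inequality applied to the nonnegative variable $e^{yX}$ gives
$$ P\{X - E[X] \ge a\} \;=\; P\{ e^{yX} \ge e^{y(a + E[X])} \} \;\le\; e^{-y(a + E[X])}\, E[e^{yX}], $$
and minimizing the right-hand side over $y \ge 0$ (the case $y = 0$ gives only the trivial bound 1) yields the generic estimate; the substance lies in bounding $E[e^{yX}]$ when the summands are heterogeneous or dependent.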
-
Ph.D. Thesis
1995
Grasping and Fixturing: a Geometric Study and an Implementation
Teichmann, Marek
Abstract
|
PDF
Title: Grasping and Fixturing: a Geometric Study and an Implementation
Candidate: Teichmann, Marek
Advisor(s): Mishra, Bud
Abstract:
The problem of immobilizing an object by placing ``fingers'' (or points) on its boundary occurs in the field of dexterous manipulation, manufacturing and geometry. In this dissertation, we consider the purely static problems of good grasp and fixture set synthesis, and explore their connection to problems in computational and combinatorial geometry. Two efficient randomized approximation algorithms are proposed for finding the smallest cover for a given convex set and for finding the largest magnitude by which a convex set can be scaled and still be covered by a cover of a given size. They generalize an algorithm by Clarkson. The cover points are selected from a set of n points. The following bounds are valid for both types of problems: for the former, c is the size of the optimal cover, and for the latter, c is the desired cover size. In both cases, a cover of size $4cd \lg c$ is returned.
The running time depends on the set to be covered. Covering an n-vertex polytope in $R^d$ takes $O(c^2 n \log n \log c)$ expected time, and covering a ball takes $O(nc^{1+\delta} + c^{\lfloor d/2 \rfloor + 1} \log n \log^{\lfloor d/2 \rfloor} c)$ expected time. These algorithms have applications to finding a good grasp or fixture set. An $O(n^2 \log n)$ algorithm for finding optimal 3-finger grasps for n-sided polygons is also given.
We also introduce a new grasp efficiency measure based on a certain class of ellipsoids, invariant under rigid motions of the object coordinate system. To our knowledge, this is the first measure having this property. We also introduce a new reactive grasping paradigm which does not require a priori knowledge of the object. This paradigm leads to several reactive algorithms for finding a grasp for parallel jaw grippers and three finger robot hands equipped with simple sensors. We show their correctness and discuss our implementation of one such algorithm: a parallel jaw gripper with light-beam sensors which we have built. A short video demonstration will also be shown.
-
TR1995-679
1995
Report on NSF Workshop on Manufacturing and Computational Geometry
Yap, C.
Abstract
|
PDF
Title: Report on NSF Workshop on Manufacturing and Computational Geometry
Author(s): Yap, C.
Abstract:
This is a summary of the NSF Workshop on Manufacturing and Computational Geometry, held at the Courant Institute of Mathematical Sciences, New York University, on April 1-2, 1994. The meeting brought together about 30 participants from both the manufacturing and the computational geometry communities for the purposes of discussing current trends in the two communities, identifying areas of mutual interest, and proposing future joint activities.