Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
TR1997-735
1997
Title: Iterative Substructuring Preconditioners for Mortar Element Methods in Two Dimensions
Author(s): Achdou, Y.; Maday, Y.; Widlund, O. B.
Abstract:
The mortar methods are based on domain decomposition and they allow for the coupling of different variational approximations in different subdomains. The resulting methods are nonconforming but still yield optimal approximations. In this paper, we will discuss iterative substructuring algorithms for the algebraic systems arising from the discretization of symmetric, second order, elliptic equations in two dimensions. Both spectral and finite element methods, for geometrically conforming as well as nonconforming domain decompositions, are studied. In each case, we obtain a polylogarithmic bound on the condition number of the preconditioned matrix.
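A note on the form of the result: bounds of this type customarily take the shape $\kappa(P^{-1}S) \le C\,(1 + \log(H/h))^2$, with $H$ the subdomain diameter, $h$ the mesh size, and $C$ independent of the number of subdomains; this is a schematic rendering only, and the precise statement and constants are those of the paper.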
-
TR1997-734
1997
Title: SDPPACK User's Guide -- Version 0.8 Beta
Author(s): Alizadeh, F.; Haeberly, J.; Nayakkankuppam, M.V.; Overton, M.L.
Abstract:
-
TR1997-737
1997
Title: SDPPACK User's Guide -- Version 0.9 Beta for Matlab 5.0
Author(s): Alizadeh, F.; Haeberly, J. A.; Nayakkankuppam, M. V.; Overton, M.L.; Schmieta, S.
Abstract:
This report describes SDPpack Version 0.9 Beta for Matlab 5.0. This version extends the previous release for semidefinite programming (SDP) to mixed semidefinite-quadratic-linear programs (SQLP), i.e., linear optimization problems over a product of semidefinite cones, quadratic cones, and the nonnegative orthant. Together, these cones make up all possible homogeneous self-dual cones over the reals.
The main routine implements a primal-dual Mehrotra predictor-corrector scheme based on the XZ+ZX search direction for SDP. More specialized routines are also available: one to solve SDPs with diagonal constraints only, and one to compute the Lovász $\theta$ function of a graph, both using the XZ search direction. Routines are also provided to determine whether an SQLP is primal or dual degenerate at its solution and whether strict complementarity holds there. Primal nondegeneracy is associated with dual uniqueness, and dual nondegeneracy with primal uniqueness, though these conditions are not equivalent if strict complementarity fails to hold.
A routine is also provided to compute the condition number of an SQLP. The Matlab code calls mex files for improved performance; binaries are available for several platforms. Benchmarks show that the codes provide highly accurate solutions to a wide variety of problems.
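For orientation, an SQLP in the sense used above can be written schematically as minimizing $c^T x$ subject to $Ax = b$ and $x \in K = \mathcal{S}^n_+ \times \mathcal{Q}^m \times \mathbb{R}^k_+$, where $\mathcal{S}^n_+$ is a product of semidefinite cones, $\mathcal{Q}^m$ a product of quadratic (second-order) cones, and $\mathbb{R}^k_+$ the nonnegative orthant; the notation here is ours, not the manual's.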
-
Ph.D. Thesis
1997
Title: Multiscale Snakes: Resolution-Appropriate Shape Descriptions
Candidate: Baldwin, Bernard
Advisor(s): Geiger, Davi
Abstract:
We present a new type of "snake" in which the dimensionality of the shapes is scaled appropriately for the resolution of the images in which the shapes are embedded. We define shapes as an ordered list of control points and compute the principal components of the shapes in a prior training set. Our energy function is based upon the Mahalanobis distance of a given shape from the mean shape and on the Mahalanobis distance of the image attributes from image attribute values extracted from the training set. We show that the derivative of this energy function with respect to the modal weights is reduced as the image resolution is reduced, and that the derivative of the energy scales with the variance associated with each mode. We exploit this property to determine the subset of the modes which are relevant at a particular level of image resolution, thereby reducing the dimensionality of the shapes. We implement a coarse-to-fine search procedure in the image and shape domains simultaneously, and demonstrate this procedure on the identification of anatomic structures in Computed Tomography images and on the identification of military vehicles in range images.
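As a point of reference for the energy term described above: if a shape $s$ is expanded about the training mean $\bar{s}$ in the principal modes with modal weights $w_i$ and eigenvalues $\lambda_i$, the Mahalanobis shape term takes the standard PCA form $(s-\bar{s})^T \Sigma^{-1} (s-\bar{s}) = \sum_i w_i^2/\lambda_i$; this is a schematic rendering in generic notation, not necessarily the thesis's.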
-
TR1997-748
1997
Title: KnittingFactory: An Infrastructure for Distributed Web Applications
Author(s): Baratloo, A.; Karaul, M.; Karl, H.; Kedem, Z. M.
Abstract:
While Java and applets have created a new perspective for Web applications, some problems are still unsolved. Among these are the question of how Java applets can find other members of the collaboration session, how to deal with the restrictions imposed by the Java security model, and how to overcome the inability of applets to communicate directly, even if they belong to the same distributed application. KnittingFactory addresses the problem of finding other members of a collaboration session by providing a distributed registry system where the search is performed within a Web browser without violating its security model; the problem of arbitrary placement of applications by providing the core functionality for downloading applets from an arbitrary node; and finally the problem of direct applet-applet communication by using the Java Remote Method Invocation mechanisms to give applets information on how their fellow applets can be reached. Two example applications validate this concept and demonstrate the ease of use of KnittingFactory.
-
TR1997-743
1997
Title: Iterative Substructuring Algorithms for the P-Version Finite Element Method for Elliptic Problems
Author(s): Bica, I.
Abstract:
In this thesis, we study iterative substructuring methods for linear elliptic problems approximated by the $p$-version finite element method. They form a class of nonoverlapping domain decomposition methods, for which the information exchange between neighboring subdomains is limited to the variables directly associated with the interface, i.e., those common to more than one subregion. Our objective is to design algorithms in $3D$ for which we can find an upper bound for the condition number $\kappa$ of the preconditioned linear system, which is independent of the number of subdomains and grows slowly with $p$.
Iterative substructuring methods for the $h$-version finite element, and spectral elements, have previously been developed and analysed by several authors. However, some very real difficulties remained when the extension of these methods and their analysis to the $p$-version finite element method was attempted, such as a lack of extension theorems for polynomials. The corresponding results are well known for Sobolev spaces, but their extension to finite element spaces is quite intricate. In our technical work, we use and further develop extension theorems for polynomials in order to prove bounds on the condition numbers of several algorithms.
We have also carried out many numerical tests. We can use our programs for several purposes. Not only can we compute the condition numbers and study the rate of convergence for a variety of the algorithms that we have developed, but we can also compute the bounds on these condition numbers, as given by the theory. This is useful because the theory predicts the order of magnitude of the actual condition numbers.
-
TR1997-733
1997
Title: On the Singular Limit of the Quantum-Classical Molecular Dynamics Model
Author(s): Bornemann, F. A.; Schuette, C.
Abstract:
In molecular dynamics applications there is a growing interest in so-called mixed quantum-classical models. These models describe most atoms of the molecular system by means of classical mechanics, but an important, small portion of the system by means of quantum mechanics. A particularly extensively used model, the QCMD model, consists of a singularly perturbed Schrödinger equation nonlinearly coupled to a classical Newtonian equation of motion.
This paper studies the singular limit of the QCMD model for finite dimensional Hilbert spaces. The main result states that this limit is given by the time-dependent Born-Oppenheimer model of quantum theory, provided the Hamiltonian under consideration has a smooth spectral decomposition. This result is strongly related to the quantum adiabatic theorem. The proof uses the method of weak convergence by directly discussing the density matrix instead of the wave functions. This technique avoids the discussion of highly oscillatory phases.
On the other hand, the limit of the QCMD model is of a different nature if the spectral decomposition of the Hamiltonian happens not to be smooth. We will present a generic example for which the limit set is not a unique trajectory of a limit dynamical system but rather a funnel consisting of infinitely many trajectories.
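For orientation, the QCMD equations referred to are customarily written in the schematic form $i\epsilon\,\dot{\psi} = H(q)\,\psi$, $\;M\ddot{q} = -\nabla_q \langle \psi, H(q)\,\psi \rangle$, where $\epsilon$ is the small singular-perturbation parameter, $q$ the classical positions, and $\psi$ the quantum wave function; the paper's scaling and notation may differ.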
-
TR1997-753
1997
Title: Overlapping Schwarz Algorithms for Solving Helmholtz's Equation
Author(s): Cai, X.; Casarin, M. A., Jr.; Elliot, F. W., Jr.; Widlund, O. B.
Abstract:
In this paper, prepared for the proceedings of the international conference on domain decomposition held in Boulder, CO in August 1997, we give a progress report on the development of a new family of domain decomposition methods for the solution of Helmholtz's equation.
We present three algorithms based on overlapping Schwarz methods; in our favorite method we proceed to the continuous finite element approximation of Helmholtz's equation through a sequence of discontinuous iterates. While this is, quite possibly, a new type of overlapping Schwarz method, we have been inspired to develop this idea by the thesis of Bruno Després.
-
TR1997-745
1997
Title: Smile consistency - A Memory Consistency Model with User-Definable High-Level Synchronization Primitives
Author(s): Chu, C.; Piatko, P.
Abstract:
We propose a new, natural memory consistency model, Smile consistency. Smile provides not only an intuitive memory consistency model but also a paradigm in which users can define their own synchronization primitives, called synchronization classes. Programmers can use synchronization classes to ease the programming work related to basic synchronization operations. Therefore, in addition to shared memory, threads can also communicate with each other via synchronization objects, instances of synchronization classes. Programs with high-level synchronization objects may also outperform those with only basic synchronization primitives.
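To give a concrete flavor of a user-defined synchronization class, here is a minimal sketch in Python. It is purely illustrative; Smile's actual interfaces and memory model are those of the paper, not of Python's threading library.

```python
import threading

class CountingBarrier:
    """A user-defined synchronization class in the spirit described above:
    threads coordinate through a synchronization object rather than only
    through shared memory. (Toy illustration, not the paper's API.)"""

    def __init__(self, parties):
        self._parties = parties
        self._count = 0
        self._cond = threading.Condition()

    def arrive(self):
        """Block until `parties` threads have arrived at the barrier."""
        with self._cond:
            self._count += 1
            self._cond.notify_all()
            self._cond.wait_for(lambda: self._count >= self._parties)

# Usage: each thread calls arrive() at its synchronization point.
barrier = CountingBarrier(3)
workers = [threading.Thread(target=barrier.arrive) for _ in range(3)]
for w in workers: w.start()
for w in workers: w.join()
```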
-
TR1997-754
1997
Title: Order of Magnitude Comparisons of Distance
Author(s): Davis, E.
Abstract:
Order of magnitude reasoning, that is, reasoning by rough comparisons of the sizes of quantities, is often called "back of the envelope calculation", with the implication that the calculations are quick though approximate. This paper exhibits an interesting class of constraint sets in which order of magnitude reasoning is demonstrably much faster than ordinary quantitative reasoning. Specifically, we present a polynomial-time algorithm that can solve a set of constraints of the form "Points a and b are much closer together than points c and d." We prove that this algorithm can be applied if "much closer together" is interpreted either as referring to an infinite difference in scale or as referring to a finite difference in scale, as long as the difference in scale is greater than the number of variables in the constraint set. We also prove that the first-order theory over such constraints is decidable.
-
TR1997-738
1997
Title: The Naive Physics Perplex
Author(s): Davis, E.
Abstract:
The "Naive Physics Manifesto" of Pat Hayes [1978] proposes a large-scale project of developing a formal theory encompassing the entire knowledge of physics of naive reasoners, expressed in a declarative symbolic form. The theory is organized in clusters of closely interconnected concepts and axioms. More recent work in the representation of commonsense physical knowledge has followed a somewhat different methodology. The goal has been to develop a competence theory powerful enough to justify commonsense physical inferences, and the research is organized in microworlds, each microworld covering a small range of physical phenomena. In this paper we compare the advantages and disadvantages of the two approaches. We also discuss some difficult key issues in automating commonsense physical reasoning.
-
TR1997-732
1997
Title: The On-Line K-Server Problem
Author(s): Floratos, A.
Abstract:
We survey the research performed during the last few years on the on-line $k$-server problem over metric spaces. A variety of algorithms are presented, both deterministic and randomized, and their performance is studied in the framework of competitive analysis. Restrictions of the problem to special cases of metric spaces are also considered.
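For reference, the framework mentioned here calls an on-line algorithm $A$ $c$-competitive if $\mathrm{cost}_A(\sigma) \le c \cdot \mathrm{cost}_{OPT}(\sigma) + \alpha$ for every request sequence $\sigma$, where $OPT$ is the optimal off-line algorithm and $\alpha$ is a constant independent of $\sigma$.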
-
TR1997-746
1997
Title: Overlapping Schwarz Methods for Vector Valued Elliptic Problems in Three Dimensions
Author(s): Hiptmair, R.; Toselli, A.
Abstract:
This paper is intended as a survey of current results on algorithmic and theoretical aspects of overlapping Schwarz methods for discrete $H(\mathrm{curl})$- and $H(\mathrm{div})$-elliptic problems set in suitable finite element spaces. The emphasis is on a unified framework for the motivation and theoretical study of the various approaches developed in recent years.
Generalized Helmholtz decompositions, that is, orthogonal decompositions into the null space of the relevant differential operator and its complement, are crucial in our considerations. It turns out that the decompositions the Schwarz methods are based upon have to be designed separately for both components. In the case of the null space, the construction has to rely on liftings into spaces of discrete potentials.
Taking the cue from well-known Schwarz schemes for second order elliptic problems, we devise uniformly stable splittings of both parts of the Helmholtz decomposition. They immediately give rise to powerful preconditioners and iterative solvers.
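Schematically, the starting point is a splitting of the finite element space $V_h$ of the form $V_h = (V_h \cap \ker d) \oplus (V_h \cap \ker d)^\perp$, where $d$ stands for $\mathrm{curl}$ or $\mathrm{div}$, with the Schwarz decomposition then designed separately on each summand; this shorthand is ours, and the paper's precise spaces and norms differ.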
-
TR1997-751
1997
Title: Adaptive Mixed Hybrid and Macro-Hybrid Finite Element Methods
Author(s): Hoppe, R. H. W.; Wohlmuth, B.
Abstract:
In this paper, we consider efficient multilevel-based iterative solvers and efficient and reliable a posteriori error estimators for mixed hybrid and macro-hybrid finite element discretizations of elliptic boundary value problems. We give an overview of the state-of-the-art techniques for these nonconforming approaches and illustrate the performance of the adaptivity concepts by some selected numerical examples.
-
TR1997-752
1997
Title: WebSeAl: Web Server Allocation
Author(s): Karaul, M. H.; Korilis, Y. A.; Orda, A.
Abstract:
With the rapid growth of the World Wide Web, clients attempting to access some popular web sites are experiencing slow response times due to server load and network congestion. Replacing the single server machine with a set of replicated servers is a cost-effective way to partition server load that also allows incremental scalability and fault transparency. Distributing these replicated servers geographically can reduce network congestion and increase availability. However, distributed web sites are faced with the issue of allocating servers: how do clients find out about the replicas, and how do they decide which one to contact? Popular web sites have well publicized server names and require a transparent mapping of the public server name to replicated servers.
Unlike most traditional approaches, we propose a technique which pushes the server allocation functionality onto the client. We argue that this approach scales well and results in increased performance in many cases. Building on theoretical work based on game theory, we show that the usage of individual replicas can be effectively controlled with cost functions even when the clients are noncooperative. We present the design and implementation of WebSeAl, our prototype system realizing these techniques. WebSeAl does not require any changes to existing client and server code, conforms to all standards, and does not generate any control messages. Preliminary experiments utilizing servers on six continents and in controlled settings indicate that WebSeAl improves performance significantly while imposing little overhead.
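As a rough illustration of pushing allocation onto the client (a sketch of the general idea only; WebSeAl's actual cost functions and game-theoretic analysis are those of the paper):

```python
import random

def choose_replica(replicas, cost):
    """Client-side replica selection sketch (hypothetical helper, not
    WebSeAl's policy). `replicas` lists server identifiers; `cost` maps
    an identifier to the client's estimated cost of using it (e.g., a
    blend of measured latency and advertised load). Randomizing among
    near-optimal replicas keeps all clients from herding onto one server.
    """
    costs = {r: cost(r) for r in replicas}
    best = min(costs.values())
    near_best = [r for r, c in costs.items() if c <= 1.1 * best]
    return random.choice(near_best)

# Example with made-up cost estimates:
print(choose_replica(["us", "eu", "asia"],
                     {"us": 120, "eu": 125, "asia": 300}.get))
```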
-
TR1997-742
1997
Title: Pincer-Search: A New Algorithm for Discovering the Maximum Frequent Set
Author(s): Lin, D-I.; Kedem, Z.
Abstract:
Discovering frequent itemsets is a key problem in important data mining applications, such as the discovery of association rules, strong rules, episodes, and minimal keys. Typical algorithms for solving this problem operate in a bottom-up, breadth-first search direction. The computation starts from frequent 1-itemsets (minimal length frequent itemsets) and continues until all maximal (length) frequent itemsets are found. During the execution, every frequent itemset is explicitly considered. Such algorithms perform reasonably well when all maximal frequent itemsets are short. However, performance drastically decreases when some of the maximal frequent itemsets are relatively long. We present a new algorithm which combines both the bottom-up and top-down directions. The main search direction is still bottom-up, but a restricted search is also conducted in the top-down direction. This search is used only for maintaining and updating a new data structure we designed, the maximum frequent candidate set. It is used to prune candidates in the bottom-up search. An important characteristic of the algorithm is that it is not necessary to explicitly examine every frequent itemset. Therefore it performs well even when some maximal frequent itemsets are long. As its output, the algorithm produces the maximum frequent set, i.e., the set containing all maximal frequent itemsets, which therefore specifies immediately all frequent itemsets. We evaluate the performance of the algorithm using a well-known benchmark database. The improvements can be up to several orders of magnitude, compared to the best current algorithms.
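The two-direction search can be sketched as follows; this is a simplified, illustrative Python rendering of the idea, and the paper's Pincer-Search performs considerably more refined candidate pruning and MFCS maintenance.

```python
from itertools import combinations

def support(itemset, transactions):
    """Number of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def pincer_sketch(transactions, minsup):
    """Two-direction search for maximal frequent itemsets.

    Bottom-up: Apriori-style level-wise candidate generation.
    Top-down: the maximum frequent candidate set (MFCS), seeded with the
    full item universe; a member found frequent is maximal, and its
    subsets never need to be counted individually.
    """
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    maximal = []                       # confirmed maximal frequent itemsets
    mfcs = [frozenset(items)]          # top-down candidates
    level = [frozenset([i]) for i in items]

    while level:
        # Top-down probe: frequent MFCS members are maximal.
        for m in mfcs[:]:
            if support(m, transactions) >= minsup:
                mfcs.remove(m)
                maximal.append(m)

        frequent, infrequent = [], []
        for c in level:
            if any(c <= m for m in maximal):
                frequent.append(c)     # covered: no counting needed
            elif support(c, transactions) >= minsup:
                frequent.append(c)
            else:
                infrequent.append(c)

        # Update MFCS: remove each infrequent itemset from the members
        # containing it, one element at a time (the "pincer" step).
        for c in infrequent:
            updated = set()
            for m in mfcs:
                if c <= m:
                    updated.update(m - {x} for x in c)
                else:
                    updated.add(m)
            mfcs = [m for m in updated if not any(m < o for o in updated)]

        # Bottom-up join step, skipping candidates already covered.
        level = sorted({a | b for a, b in combinations(frequent, 2)
                        if len(a | b) == len(a) + 1
                        and not any(a | b <= m for m in maximal)},
                       key=sorted)

    return [m for m in maximal if not any(m < o for o in maximal)]

# Tiny example on made-up data: prints the maximal frequent itemsets
# {'a','b','c'} and {'b','c','d'} at minimum support 2.
txns = [{'a','b','c','d'}, {'a','b','c'}, {'a','b'}, {'c','d'}, {'b','c','d'}]
print(pincer_sketch(txns, 2))
```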
-
Ph.D. Thesis
1997
Title: Deformable Object Recognition with Articulations and Occlusions
Candidate: Liu, Tyng-Luh
Advisor(s): Geiger, Davi
Abstract:
The subject of this thesis is deformable object recognition. We concentrate on issues of articulations and of occlusions.
In order to find a target object (undergoing articulations) in an image we use the following procedures: (i) extracting key features in an image, (ii) detecting key points in the model, (iii) efficiently searching through possible image segmentations, and (iv) comparing and grouping shapes. Together, they reconstruct the target object in the image. A Bayesian rationale is presented to justify this strategy.
Our main focus in this thesis is on (iii) and (iv). More precisely, we are interested in shape representation, shape similarity, and combining shape similarity with image segmentation.
We consider two possible shape representations for an object. The first is given by its shape contour (SC), or silhouette; the other is described by the structure of the symmetry axis (SA), or skeleton, which has a unique free tree structure. For shape similarity, we review a string matching method based on the SC representation and then develop a tree matching scheme using the SA-tree representation. The advantage of this approach is that it becomes extremely simple to account for articulations and occlusions. As a novelty, the SA is obtained via a shape comparison between an SC and its mirror version. Finally, we study how to integrate the shape module, for both shape representations (SC and SA), with an active contour tracker to yield an image segmentation.
Throughout, our aim has been to provide methods that are guaranteed to find optimal solutions.
We also address the topic of occluded object recognition from a different viewpoint. We treat it as a function approximation problem with an over-complete basis (a library of image templates) that also accounts for occlusions, where the basis superposition principle is no longer valid. Since the basis is over-complete, there are infinitely many ways to decompose the image; we are motivated to select a sparse/compact representation of the image that accounts for occlusions and noise.
-
Ph.D. Thesis
1997
Title: Partial evaluation of concurrent programs
Candidate: Marinescu, Mihnea
Advisor(s): Goldberg, Benjamin
Abstract:
The goal of this dissertation is to develop partial evaluation (program specialization) techniques specific to concurrent programs.
The language chosen for this investigation is a very simple CSP-like language. A standard binding-time analysis for imperative languages is conservatively extended in order to deal with the basic concurrent constructs: synchronous communication and nondeterministic choice. Based on the resulting binding-time annotations, a specialization transformation is formally defined using a labeled transition system with actions. The correctness of the partial evaluation is stated and a proof is included. This result is closely related to (strong) bisimulation, the equivalence relation on transition systems. We name the two directions of the bisimulation equivalence soundness and completeness, respectively.
In order to maintain a clear presentation, this simple specialization algorithm addresses only the data transfer component of the communication; a post-specialization analysis for the detection and removal of redundant synchronizations (i.e. synchronizations whose removal does not increase the nondeterminism of a program) is presented separately. This redundant-synchronization analysis is based on the characterization of dependencies in a CSP-like language.
Several pragmatic issues such as improving the binding-time analysis, controlling loop unrolling and the consequences of lifting nondeterminism from run-time to specialization-time are discussed. Two additional binding-time analyses are presented. We call one of them speculative because the specialization transformation based on it is sound but not complete. We call the other one extended because it includes an on-line redundant-synchronization analysis.
The relationship between partial evaluation and different types of fairness is also studied. In order to deal with a wide range of fair run-time systems, ranging from strong to weak, and from process-fair to channel-fair and communication-fair, we use a general operational framework for specifying fairness properties as systematic means of reducing nondeterminism. We then prove the correctness (as bisimulation equivalence) or just the soundness of specialization transformations under various binding-time analyses.
Throughout the dissertation, the power of the newly developed techniques is shown in several examples.
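For readers new to partial evaluation, the classic sequential example below conveys the flavor of specialization in plain Python; the thesis itself targets a CSP-like concurrent language, where communication and nondeterministic choice are the difficult part, so this is background only.

```python
def power_gen(n):
    """Specialize x**n for a statically known exponent n: the loop over n
    is unfolded at specialization time, leaving a residual straight-line
    program. (Textbook partial-evaluation example, not from the thesis.)"""
    expr = " * ".join(["x"] * n) if n > 0 else "1"
    return eval("lambda x: " + expr)

cube = power_gen(3)   # residual program: lambda x: x * x * x
print(cube(2))        # prints 8
```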
-
M.S. Thesis
1997
Title: Real/Expr: Implementation of an Exact Computation Package
Candidate: Ouchi, Kouji
Advisor(s): Yap, Chee
Abstract:
The Real/Expr package is a C++ project to support the precision-driven approach to exact computation of geometric algorithms. The package is built on top of the class Real that encompasses a variety of numerical representations. The class Expr captures a set of algebraic expressions on which any comparison can be done precisely.
The software libraries described here are available via the Web page http://simulation.nyu.edu/projects/exact/.
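The precision-driven idea can be sketched as follows, in a toy Python rendering using the standard decimal module. Real/Expr itself is a C++ library whose Expr class uses root bounds to guarantee exact sign determination, which this sketch does not.

```python
from decimal import Decimal, getcontext

def compare(f, g, start=20, limit=320):
    """Evaluate two expressions at increasing working precision until the
    gap between them clearly exceeds a conservative rounding-error bound.
    f and g are callables returning Decimal approximations at the current
    precision. Returns -1, 0, or 1. (Illustrative only; without an exact
    zero bound, 0 means 'indistinguishable up to `limit` digits'.)"""
    prec = start
    while prec <= limit:
        getcontext().prec = prec
        a, b = f(), g()
        # a few units in the last place of the larger operand
        err = Decimal(10) ** (max(a.adjusted(), b.adjusted()) - prec + 2)
        if abs(a - b) > 2 * err:
            return -1 if a < b else 1
        prec *= 2
    return 0

# sqrt(2)*sqrt(3) and sqrt(6) agree at every precision tried:
print(compare(lambda: Decimal(2).sqrt() * Decimal(3).sqrt(),
              lambda: Decimal(6).sqrt()))          # 0
# sqrt(2)+sqrt(3) < sqrt(10) is detected at low precision:
print(compare(lambda: Decimal(2).sqrt() + Decimal(3).sqrt(),
              lambda: Decimal(10).sqrt()))         # -1
```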
-
TR1997-739
1997
Title: A Uniform Framework for Ordered Restriction Map Problems
Author(s): Parida, L.
Abstract:
Optical Mapping is an emerging technology for constructing ordered restriction maps of DNA molecules. The underlying computational problems for this technology have been studied and several cost functions have been proposed in recent literature. Most of these propose combinatorial models; one of them also presents a probabilistic approach. However, it is not a priori clear how these cost functions relate to one another and to the underlying problem. We present a uniform framework for the restriction map problems in which each of these various models is a specific instance of the basic framework. We achieve this by identifying the following approaches to the ordered restriction map problem: (1) using data consensus or agreement, and (2) optimizing a characteristic function of the data. Our framework also opens up the possibility of exploring other cost functions. An additional feature is that we not only integrate the combinatorial models but also analyze the probabilistic model within the same framework. Finally, we indicate the open problems by including a survey of the best known complexity results for these problems.
-
TR1997-740
1997
Title: Inapproximability of Flip-Cut, Shift-Cut and Other problems from Optical Mapping
Author(s): Parida, L.
Abstract:
Optical Mapping is an emerging technology for constructing ordered restriction maps of DNA molecules. The study of the complexity of the problems arising in Optical Mapping has generated considerable interest amongst computer science researchers. In this paper we examine the complexity of these problems.
Optical Mapping leads to various computational problems such as the Binary Flip Cut (BFC) problem, the Weighted Flip Cut (WFC) problem, the Exclusive Binary Flip Cut (EBFC) problem \cite{parida1, parida2}, the Binary Shift Cut (BSC) problem, the Binary Partition Cut (BPC) problem, and others. The complexity and hardness of the BFC and WFC problems were not known. Using the technique of gap-preserving reduction from the max-cut problem, we show that the BFC and WFC problems are MAX SNP-hard and that achieving an approximation ratio $1-\Upsilon/7$ for these problems is NP-hard, where $\Upsilon$ denotes the upper bound on the polynomial time approximation factor of the well-known max-cut problem. A slight variation of BFC, BFC$_{\max K}$, had been shown to be NP-hard; we improve this result to show that BFC$_{\max K}$ is MAX SNP-hard and that achieving an approximation ratio $(1-\Upsilon/7)\frac{p_{\max}}{p_{\min}}$ for BFC$_{\max K}$ is NP-hard, where $p_{\min}$ and $p_{\max}$ are the minimum and maximum of the digestion rates in the given problem. The EBFC problem was shown to be NP-complete; we improve this result to show that EBFC is MAX SNP-hard and that achieving an approximation ratio $1-\Upsilon/7$ for EBFC is NP-hard. However, a dense instance of the EBFC problem does have a PTAS.
The Binary Partition Cut (modeling spurious molecules) problem has been shown to be NP-complete; we show in this paper that a (reasonable) unrestrained version of it has an efficient polynomial time algorithm. A variation of the Binary Shift Cut (modeling missing fragments), BSC$_{\max K}$, had been shown to be NP-hard \cite{Tom}; we show both versions of this problem to be MAX SNP-hard, and achieving an approximation ratio $1-\Upsilon/6$ for BSC and a ratio $(1-\Upsilon/6)\frac{p_{\max}}{p_{\min}}$ for BSC$_{\max K}$ is NP-hard. In addition, we show that the $d$-wise Match ($d$M) problem is MAX SNP-hard and that achieving an approximation ratio $1-\Upsilon$ for it is NP-hard.
-
TR1997-741
1997
Title: Junctions: Detection, Classification and Reconstruction
Author(s): Parida, L.; Geiger, D.; Hummel, R.
Abstract:
Junctions are important features for image analysis and form a critical aspect of image understanding tasks such as object recognition. We present a unified approach to detecting (locating the center of the junction), classifying (by the number of wedges: lines, corners, $3$-junctions such as $T$- or $Y$-junctions, or $4$-junctions such as $X$-junctions), and reconstructing junctions (in terms of radius size, the angles of each wedge, and the intensity in each of the wedges) in images. Our main contribution is a model of the junction that is complex enough to handle all these issues and yet simple enough to admit an effective dynamic programming solution. Broadly, we use a template deformation framework along with a gradient criterion to detect radial partitions of the template. We use the Minimum Description Length (MDL) principle to obtain the optimal number of partitions that best describes the junction.
Kona is an implementation of this model. We (quantitatively) demonstrate the stability and robustness of the detector by analyzing its behavior in the presence of noise, using synthetic/controlled apparatus. We also present a qualitative study of its behavior on real images.
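In MDL terms, the selection above picks the number of wedges $k$ minimizing a two-part description length, schematically $k^* = \arg\min_k [L(\mathrm{model}_k) + L(\mathrm{data} \mid \mathrm{model}_k)]$, so an extra wedge is admitted only when it pays for its own description cost through a better fit; this is the generic principle, not the paper's exact cost function.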
-
TR1997-744
1997
Title: Iterative Substructuring Methods for Spectral Element Discretizations of Elliptic Systems. I: Compressible Linear Elasticity
Author(s): Pavarino, L. F.; Widlund, O. B.
Abstract:
An iterative substructuring method for the system of linear elasticity in three dimensions is introduced and analyzed. The pure displacement formulation for compressible materials is discretized with the spectral element method. The resulting stiffness matrix is symmetric and positive definite.
The method proposed provides a domain decomposition preconditioner constructed from local solvers for the interior of each element and for each face of the elements, together with a coarse, global solver related to the wire basket of the elements. As in the scalar case, the condition number of the preconditioned operator is independent of the number of spectral elements and grows as the square of the logarithm of the spectral degree.
-
TR1997-755
1997
Title: Iterative Substructuring Methods for Spectral Element Discretizations of Elliptic Systems. II: Mixed Methods for Linear Elasticity and Stokes Flow
Author(s): Pavarino, L. F.; Widlund, O. B.
Abstract:
Iterative substructuring methods are introduced and analyzed for saddle point problems with a penalty term. Two examples of saddle point problems are considered: the mixed formulation of the linear elasticity system and the generalized Stokes system in three dimensions. These problems are discretized with mixed spectral element methods. The resulting stiffness matrices are symmetric and indefinite. The unknowns interior to each element are first implicitly eliminated by using exact local solvers. The resulting saddle point Schur complement is solved with a Krylov space method with block preconditioners. The velocity block can be approximated by a domain decomposition method, e.g., of wire basket type, which is constructed from local solvers for each face of the elements, and a coarse solver related to the wire basket of the elements. The condition number of the preconditioned operator is independent of the number of spectral elements and is bounded from above by the product of the square of the logarithm of the spectral degree and the inverse of the discrete inf-sup constant of the problem.
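Transcribing the final sentence above into a formula: the bound is of the form $\kappa \le C\,(1 + \log p)^2\,\beta^{-1}$, with $p$ the spectral degree, $\beta$ the discrete inf-sup constant, and $C$ independent of the number of spectral elements; the $(1+\log p)^2$ shape is the customary one, and the paper's precise constants may differ.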
-
TR1997-747
1997
Title: Iterative Substructuring Methods for Spectral Element Discretizations of Elliptic Systems in Three Dimensions
Author(s): Pavarino, L. F.; Widlund, O. B.
Abstract:
Spectral element methods are considered for symmetric elliptic systems of second-order partial differential equations, such as the linear elasticity and the Stokes systems in three dimensions. The resulting discrete problems can be positive definite, as in the case of compressible elasticity in pure displacement form, or saddle point problems, as in the case of almost incompressible elasticity in mixed form and Stokes equations. Iterative substructuring algorithms are developed for both cases. They are domain decomposition preconditioners constructed from local solvers for the interior of each element and for each face of the elements and a coarse, global solver related to the wire basket of the elements. In the positive definite case, the condition number of the resulting preconditioned operator is independent of the number of spectral elements and grows at most in proportion to the square of the logarithm of the spectral degree. For saddle point problems, there is an additional factor in the estimate of the condition number, namely, the inverse of the discrete inf-sup constant of the problem.
-
Ph.D. Thesis
1997
Title: Pricing and Hedging Volatility Risk in Interest-Rate Derivatives
Candidate: Porras, Juan
Advisor(s): Avellaneda, Marco
Abstract:
This work addresses the problem of pricing interest-rate derivative securities and the use of quoted prices of traded instruments to calibrate the corresponding interest-rate dynamics. To this end, an arbitrage-free model of interest rate evolution is adopted, for which the local drift will depend on the history of volatility, thus leading to path-dependent pricing. This model is based on the Heath-Jarrow-Morton formulation but, in addition, presupposes that the volatility process is not defined a priori. This leads to a path-dependent model that can be formulated in a Markovian framework by considering additional state variables, and hence increasing the dimensionality of the computation. Instead of solving the resulting 3-dimensional partial differential equation, an alternative approach, based on conditional expectations of the history of volatility, is taken. This pricing method is applied to a non-linear (adverse volatility) setting, and used as the core of a non-parametric model calibration technique. The algorithm, by performing an optimization over volatility surfaces, finds a volatility surface that matches the market prices of a given set of securities. This method also finds a hedge for volatility risk, using derivative securities as hedging instruments. In particular, we present results obtained for the problem of hedging American swaptions (options on interest-rate swaps) using European swaptions.
The conditional expectation approach is explored further, and found to be of interest in its own right for the pricing of several kinds of path-dependent instruments, providing an alternative to increasing state-space dimension in order to satisfy a Markov property. In particular, we show how this method speeds up the computation of prices for some types of exotic options, while being general enough to apply to both linear and non-linear pricing of portfolios.
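For context, in the Heath-Jarrow-Morton framework the absence of arbitrage ties the forward-rate drift to the volatility through the standard restriction $\mu(t,T) = \sigma(t,T) \int_t^T \sigma(t,s)\,ds$ under the risk-neutral measure, which is why specifying the (here path-dependent) volatility process determines the local drift.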
-
Ph.D. Thesis
1997
Title: Performance Modeling for Realistic Storage Devices
Candidate: Shriver, Elizabeth
Advisor(s): Siegel, Alan; Wilkes, John
Abstract:
Managing large amounts of storage is difficult and becoming more so as both the complexity and number of storage devices are increasing. One approach to this problem is a self-managing storage system. Since a self-managing storage system is a real-time system, it requires a model that quickly approximates the behavior of the storage device in a workload-dependent fashion. We develop such a model.
Our approach to modeling devices is to model the individual components of the device, such as queues, caches, and disk mechanisms, and then compose the components. To determine the performance of a component, each component modifies the entering workload use patterns and determines the performance from the workload use patterns and the lower-level device behavior. For example, modifying the use patterns allows us to capture the altered spatial locality that occurs when queues reorder their requests.
Our model predicts the device behavior in terms of response time within an 8% relative error for an interesting subset of the domain of devices and workloads. To demonstrate this, the model has been validated with synthetic traces of parallel scientific file system applications and traces of transaction processing applications.
Our contributions to the area of performance modeling for storage devices include the following:
1. Methods to approximate the positioning time for the disk head of a magnetic disk.
2. Methods to approximate the queue delay for non-FCFS scheduling algorithms.
3. Methods to approximate the cache-miss probabilities and the full and partial cache-hit probabilities in the data caches in the I/O path, using measures of workload spatial locality.
4. Methods to approximate the mean seek time and rotational latency of the disk mechanism, using measures of workload spatial locality (a generic sketch of this kind of model follows the list).
5. An infrastructure for developing a composite model. The infrastructure supports the development of more complicated devices and workloads than we have validated.
Together, these mean that we have analytic methods to approximate the behavior of a set of realistic storage devices.
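To give one concrete flavor of contribution 4: disk models in this tradition often approximate seek time with a two-regime curve, square-root for short seeks and linear for long ones. The sketch below uses that generic form with illustrative constants, not the thesis's fitted model.

```python
import math

def seek_time_ms(distance_cyls, a=1.5, b=0.3, c=0.002, boundary=400):
    """Two-regime seek-time curve: acceleration-dominated (square root)
    for short seeks, constant-velocity (linear) for long ones.
    All constants are illustrative placeholders, not measured values."""
    if distance_cyls <= 0:
        return 0.0
    if distance_cyls < boundary:
        return a + b * math.sqrt(distance_cyls)
    # continuous continuation past the boundary
    return a + b * math.sqrt(boundary) + c * (distance_cyls - boundary)

def mean_seek_ms(distances):
    """Mean seek time under a workload-derived seek-distance sample."""
    return sum(seek_time_ms(d) for d in distances) / len(distances)

print(round(seek_time_ms(100), 2), round(seek_time_ms(10000), 2))
```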
-
TR1997-736
1997
Title: Overlapping Schwarz Methods for Maxwell's Equations in Three Dimensions
Author(s): Toselli, A.
Abstract:
Two-level overlapping Schwarz methods are considered for finite element problems of 3D Maxwell's equations. Nédélec elements built on tetrahedra and hexahedra are considered. Once the relative overlap is fixed, the condition number of the additive Schwarz method is bounded, independently of the diameter of the triangulation and the number of subregions. A similar result is obtained for a multiplicative method. These bounds are obtained for quasi-uniform triangulations. In addition, for the Dirichlet problem, the convexity of the domain has to be assumed. Our work generalizes well-known results for conforming finite elements for second order elliptic scalar equations.
-
TR1997-750
1997
Title: The Coupling of Mixed and Conforming Finite Element Discretizations
Author(s): Wieners, C.; Wohlmuth, B.
Abstract:
In this paper, we introduce and analyze a special mortar finite element method. We restrict ourselves to the case of two disjoint subdomains, and use Raviart-Thomas finite elements in one subdomain and conforming finite elements in the other. In particular, this might be interesting for the coupling of different models and materials. Because of the different roles of Dirichlet and Neumann boundary conditions, a variational formulation without a Lagrange multiplier can be presented. It can be shown that no matching conditions for the discrete finite element spaces are necessary at the interface. Using static condensation, a coupling of conforming finite elements and enriched nonconforming Crouzeix-Raviart elements satisfying Dirichlet boundary conditions at the interface is obtained. The Dirichlet problem is then extended to a variational problem on the whole nonconforming ansatz space. It can be shown that this is equivalent to a standard mortar coupling between conforming and Crouzeix-Raviart finite elements where the Lagrange multiplier lives on the side of the Crouzeix-Raviart elements. We note that the Lagrange multiplier represents an approximation of the Neumann boundary condition at the interface. Finally, we present some numerical results and sketch the ideas of the algorithm. The arising saddle point problems are solved by multigrid techniques with transforming smoothers.
-
TR1997-749
1997
Title: Hierarchical A Posteriori Error Estimators for Mortar Finite Element Methods with Lagrange Multipliers
Author(s): Wohlmuth, B.
Abstract:
Hierarchical a posteriori error estimators are introduced and analyzed for mortar finite element methods. A weak continuity condition at the interfaces is enforced by means of Lagrange multipliers. The two proposed error estimators are based on a defect correction in higher order finite element spaces and an adequate hierarchical two-level splitting. The first provides upper and lower bounds for the discrete energy norm of the mortar finite element solution whereas the second also estimates the error for the Lagrange multiplier. It is shown that an appropriate measure for the nonconformity of the mortar finite element solution is the weighted $L^2$-norm of the jumps across the interfaces.