Title: Stochastic Solutions to the Schrödinger Equation for Fermions
Candidate: Arnow, David Moss
Abstract:
An exact stochastic method has been developed for generating the antisymmetric eigensolution of lowest index, and its associated eigenvalue, for the Schrödinger wave equation in 3N dimensions. The method is called the Green's function Monte Carlo method for fermions (FGFMC) because it is based on a Monte Carlo solution to the integral form of the Schrödinger equation (using Green's function) and because it is the fermion class of particles in physics which requires antisymmetric solutions. The solution consists of two sets of 3N-dimensional points, {R_j⁺} and {R_j⁻}, distributed by density functions ψ⁺ and ψ⁻, whose difference, ψ⁺ − ψ⁻, is proportional to the eigensolution ψ_F. These sets may be used to estimate integrals of the form ∫ f(R) ψ_F(R) dR / ∫ g(R) ψ_F(R) dR, where R = (x_1, ..., x_3N) and where f(R) and g(R) are antisymmetric functions. By setting g(R) to ψ_T(R) and f(R) to Hψ_T(R), where ψ_T is an antisymmetric trial wave function satisfying the boundary conditions, E_F is obtained. The method is exact because the only sources of error are variance and bias, both of which can be estimated and reduced, either by employing larger sample sizes or by reconstructing the sampling procedure in ways that make greater use of our understanding of the problem (importance sampling). There are no physical or mathematical approximations other than the statistical one. The crux of the method is a sampling procedure which constructs the two sets of points in linear time (as a function of accuracy); earlier methods were exponential in cost. The FGFMC method is successfully applied to a one-dimensional problem and a nine-dimensional problem, the results of which are presented here. These results demonstrate that the method can be successfully applied to small physical problems on medium-scale computing machines.
The key to this success was the transformation of the problem from exponential to linear cost as a function of accuracy. The strong dependence on dimensionality, however, currently results in an exponential cost as a function of problem size, and this, until overcome, imposes a severe barrier to calculations on large systems.
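The ratio estimator described in the abstract can be sketched on a toy problem. The following is a minimal illustration, not Arnow's sampler: it assumes the two point sets have already been drawn from ψ⁺ and ψ⁻ (here by direct inverse-transform sampling, which is only possible because the toy answer is known), and forms the ratio estimate of E_F. The test problem, trial function, and parameter a = 0.8 are all illustrative choices.

```python
import math
import random

# Toy problem: 1-D harmonic oscillator (hbar = m = omega = 1). Its lowest
# antisymmetric eigensolution is psi_F(x) = x exp(-x^2/2) with E_F = 3/2,
# so psi+ / psi- are the positive / negative parts of psi_F.

def sample_psi_plus(rng):
    # Inverse-transform sample of the density ~ x exp(-x^2/2) on x > 0.
    return math.sqrt(-2.0 * math.log(1.0 - rng.random()))

def psi_T(x, a=0.8):   # deliberately inexact antisymmetric trial function
    return x * math.exp(-a * x * x / 2.0)

def H_psi_T(x, a=0.8): # (-1/2 d^2/dx^2 + x^2/2) applied to psi_T
    return (0.5 * (1.0 - a * a) * x ** 3 + 1.5 * a * x) * math.exp(-a * x * x / 2.0)

def estimate_E_F(n=200_000, seed=1):
    rng = random.Random(seed)
    xs_plus = [sample_psi_plus(rng) for _ in range(n)]       # points from psi+
    xs_minus = [-sample_psi_plus(rng) for _ in range(n)]     # points from psi-
    # Ratio estimator: (sum over + points minus sum over - points) of f,
    # divided by the same difference for g, with f = H psi_T and g = psi_T.
    num = sum(map(H_psi_T, xs_plus)) - sum(map(H_psi_T, xs_minus))
    den = sum(map(psi_T, xs_plus)) - sum(map(psi_T, xs_minus))
    return num / den

print(estimate_E_F())  # close to the exact E_F = 1.5
```

Note that the estimate is unbiased toward E_F even though the trial function is inexact, because ∫ Hψ_T ψ_F / ∫ ψ_T ψ_F = E_F for any antisymmetric ψ_T; only the variance depends on the quality of ψ_T.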
Title: Synchronization Efficiency
Candidate: Borg, Anita
Abstract:
A generally applicable methodology for the analysis of synchronization efficiency is introduced. It is based upon the assumption that synchronization is required because of the need to control the use of resources by concurrent processes. Two aspects of synchronization efficiency are identified: time efficiency and accuracy efficiency. Time efficiency provides a measure of the use of resources during synchronization. Accuracy efficiency specifies how well a solution to a synchronization problem supports the rules of that problem. The methodology involves the simulation of solutions to synchronization problems as greater and greater implementation detail is specified. The assumptions made concerning the execution times of operations, especially synchronization operations, are seen to be crucial to the correct analysis of synchronization efficiency. It is argued that the only reasonable assumption for the execution times of synchronization operations, when their implementation is left unspecified, is that they execute instantaneously. However, it is also shown that this assumption must be used with care in order to avoid erroneous conclusions. The methodology is applied to PV, monitor, and Ada solutions to the mutual exclusion, reader-writer, and consumer-producer problems. The PV solutions were usually the most efficient, while the Ada solutions were found to be the least efficient. It is also shown that no single characteristic of a solution determines its efficiency. However, the primary characteristics affecting efficiency are shown to be: (1) the execution time required for synchronization; (2) the rules for execution of the synchronizing computations; (3) the amount of competition among processes; and (4) the amount and cost of process switching required during synchronization. It is the interaction of these factors which determines synchronization efficiency.
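For readers unfamiliar with the PV style of solution analyzed above, the following is a minimal illustration (not the thesis's simulation methodology) of a semaphore solution to the mutual exclusion problem, the simplest of the three problems studied. P corresponds to acquire and V to release.

```python
import threading

# One binary semaphore guards the critical section.
mutex = threading.Semaphore(1)
counter = 0

def worker(increments):
    global counter
    for _ in range(increments):
        mutex.acquire()        # P(mutex): enter the critical section
        counter += 1           # use of the shared resource
        mutex.release()        # V(mutex): leave the critical section

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every increment was protected
```

In the thesis's terms, the time cost of this solution includes both the P/V operations themselves and the process switching they induce under contention, which is why no single characteristic determines efficiency.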
Title: Circle Graphs
Candidate: Buckingham, Mark Alan
Advisor(s): Golumbic, Martin
Abstract:
From a circle with chords we may derive a graph whose nodes correspond to chords and whose edges correspond to intersecting chords. Such a graph is called a circle graph. After numbering the endpoints of the chords such that two endpoints are numbered the same iff the endpoints belong to the same chord, we form a circle graph sequence by reading off these numbers going around the outside of the circle. Circle graph sequences are often used to prove properties of circle graphs. In this dissertation we discuss many mathematical and algorithmic aspects of circle graphs. The number of different circle-with-chords representations that yield a chordless path is given. The property that a circle with chords is connected (that is, its derived circle graph is connected) and the property that a circle with chords has two separated chords (that is, two chords that cannot both be intersected by a third chord without the third intersecting a fourth chord) are described in terms of circle graph sequences. They are found to be dual to one another. An incomplete forbidden subgraph characterization of circle graphs is also presented. An important result of this dissertation is that the Berge Strong Perfect Graph Conjecture is shown to hold for the class of circle graphs. Many properties of p-critical graphs and partitionable graphs are given, most with simplified proofs. Some new results are presented and a new, very simple proof of the Berge Conjecture for K_{1,3}-free graphs is put forward. Very efficient algorithms for finding maximum (weighted) cliques and maximum (weighted) stable sets of the derived circle graph of a circle graph sequence are given.
We find an O(e log₂ ω) algorithm for the unweighted clique problem, an O(δe) algorithm for the weighted clique problem, and an O(c) algorithm for the weighted stable set problem, where e is the number of edges in the graph, ω the maximum clique size, δ the maximum degree, and c the number of occurrences of an interval being completely contained in another interval in the circle graph sequence. Some open problems for further research are listed.
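The derivation of a circle graph from its sequence can be sketched directly. The following is an illustrative implementation (the function name and representation are mine, not the dissertation's): each chord label appears exactly twice in the sequence, and two chords intersect exactly when their endpoint positions interleave around the circle.

```python
from itertools import combinations

def circle_graph(sequence):
    """Edge set of the circle graph derived from a circle graph sequence.

    `sequence` lists chord labels in the order their endpoints occur
    around the circle; each label appears exactly twice.
    """
    pos = {}
    for i, label in enumerate(sequence):
        pos.setdefault(label, []).append(i)
    edges = set()
    for a, b in combinations(pos, 2):
        a1, a2 = pos[a]
        b1, b2 = pos[b]
        # Chords interleave iff exactly one endpoint of b lies strictly
        # between the two endpoints of a.
        if (a1 < b1 < a2) != (a1 < b2 < a2):
            edges.add(frozenset((a, b)))
    return edges

# Chords 1 and 2 interleave (1 2 1 ... 2), so they cross;
# chord 3 is nested between them and crosses neither.
print(circle_graph([1, 2, 1, 3, 3, 2]))
```

This quadratic check is only a definition made executable; the dissertation's clique and stable set algorithms exploit the sequence structure far more carefully to reach the bounds quoted above.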
Title: Decision Procedures for some Classes of Unquantified Set Theoretic Formulae
Candidate: Ferro, Alfredo
Advisor(s): Schwartz, Jacob T.; Mammana, Carmelo
Abstract:
We consider the first-order language consisting of = (equality), ∈ (membership), ∪ (binary union), ∩ (binary intersection), \ (set difference), and pow (powerset former). We show that the class of all universal sentences of this language is decidable, provided that we impose the strong restriction that at most two terms appear as arguments of the powerset former. As a preliminary result we show that the class of all universal sentences in the above language, extended by allowing infinitely many constants (one for each hereditarily finite set), is decidable provided that we allow only a single occurrence of the powerset former.
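The operators of this language can be made concrete by interpreting them over hereditarily finite sets. The sketch below is only an illustration of the language's semantics, not the decision procedure itself; the representation as nested frozensets and the function names are mine.

```python
from itertools import combinations

EMPTY = frozenset()

def pow_(s):
    """Powerset former: the set of all subsets of s."""
    return frozenset(frozenset(c) for r in range(len(s) + 1)
                     for c in combinations(s, r))

def hf_sets(rank):
    """All hereditarily finite sets of rank <= rank, via iterated powerset."""
    level = frozenset({EMPTY})
    for _ in range(rank):
        level = pow_(level)
    return level

# Spot-check one universal sentence on a small finite fragment (a sanity
# check on small sets, not a proof of validity): if x is a subset of y,
# then pow(x) is a subset of pow(y). Union, intersection, and set
# difference are Python's |, & and - on frozensets.
universe = hf_sets(2)
ok = all(pow_(x) <= pow_(y) for x in universe for y in universe if x <= y)
print(ok)  # True on this fragment
```

A genuine decision procedure must of course bound, a priori, which finite structures need to be examined; that is exactly what the restrictions on occurrences of the powerset former make possible.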
Title: A Transformational Framework for Automatic Derived Data Control and its Applications in an Entity-Relationship Data Model
Candidate: Koenig, Shaye
Abstract:
This thesis investigates the specification, implementation, and application of derived data in the context of MADAM, an entity-relationship-oriented, map-based data model/programming language for database conceptual schema representation and processing. The data representation and manipulation facilities of MADAM, described in chapter 2, represent a synthesis of ideas from the areas of very high level languages, in particular SETL, and the binary association and entity-relationship approaches to data modeling. Derived data refers to data that appears to exist in its declared form but is actually derived from related data in the database. Previous approaches to the materialization of derived data have been based on a global recalculation strategy in which derived data is recomputed whenever it is referenced. In this thesis we present an alternative approach in which derived data is explicitly stored and incrementally maintained. In chapter 3, we describe the definition of derived data in MADAM; discuss its importance as a means of fostering logical data independence, providing access control mechanisms, and supporting semantic relativism; and present a unified framework for the automatic maintenance of derived data. This framework is based on the transformational techniques of finite differencing, in which repeated costly computations are replaced by more efficient incremental counterparts. Beyond its importance for supporting alternative views of the same data, additional applications of our incremental maintenance approach, to the implementation of summary data, integrity control, and triggers, are discussed in chapter 4.
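The finite-differencing idea behind incremental maintenance can be shown in a few lines. The following is a hedged sketch (not MADAM itself, and the class and names are mine): the derived value is explicitly stored and updated in O(1) at each change, rather than recomputed by a global recalculation at each reference.

```python
class Accounts:
    """Base data: named balances. Derived data: their running total."""

    def __init__(self):
        self.balances = {}
        self.total = 0            # derived datum, explicitly stored

    def set_balance(self, name, amount):
        old = self.balances.get(name, 0)
        self.balances[name] = amount
        # Incremental counterpart of recomputing sum(self.balances.values()):
        # apply only the difference caused by this update.
        self.total += amount - old

a = Accounts()
a.set_balance("x", 10)
a.set_balance("y", 5)
a.set_balance("x", 7)
print(a.total)  # 12, identical to a full recalculation
```

Summary data, integrity constraints, and triggers all fit the same pattern: each is a stored quantity or condition whose incremental update rule is derived from the update to the base data.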
Title: Upper and Lower Bounds on the Performance of Parallel Algorithms
Candidate: Kruskal, Clyde Philip
Advisor(s): Schwartz, Jacob T.
Abstract:
With the advent of VLSI, new opportunities in computer architecture are emerging. Parallel processors composed of many thousands of processing elements (PEs) will soon be practical. In this thesis, we derive both upper and lower bounds for parallel algorithms. Our analyses emphasize two specific models of parallel computation, the ultracomputer and the paracomputer, but the general ideas and many of the results are much more widely applicable. We present general lower bounds for solving a wide class of problems on direct connection machines, and a sharper lower bound for effecting permutations. This latter bound shows that the permutation problem is not completely parallelizable on any direct connection machine that is not almost completely connected. In addition, using a very general model of parallel computation, we study the worst-case time complexity of searching in parallel. We then present a large collection of basic algorithms for both the ultracomputer and the paracomputer. Since many of these algorithms achieve the lower bounds mentioned above, both models are extremely effective parallel computer systems. Finally, a systematic method for generalizing any dependent-size algorithm to an independent-size one is given.
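The flavor of the basic algorithms studied on such models can be sketched with recursive doubling, a standard pattern for summing n values with n PEs in ceil(log2 n) steps. This is an illustrative simulation under an assumed synchronous model, not a specific algorithm from the thesis: each "parallel" step performs all reads before any writes.

```python
def parallel_sum(values):
    """Simulate a synchronous n-PE machine summing n values in log steps."""
    x = list(values)
    n = len(x)
    step = 1
    while step < n:
        # One parallel step: PE i (with i + step < n) reads x[i + step].
        # All reads complete before any writes, as on a synchronous machine.
        reads = [x[i + step] if i + step < n else 0 for i in range(n)]
        x = [x[i] + reads[i] for i in range(n)]
        step *= 2
    return x[0]

print(parallel_sum(range(10)))  # 45, after ceil(log2 10) = 4 parallel steps
```

On a direct connection machine the cost of the communication implied by `x[i + step]` is exactly what the lower bounds in the thesis charge for; on an idealized paracomputer it is free.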