Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
Ph.D. Thesis
1987
Title: A Decision Procedure for a Class of Unquantified Formulae of Set Theory Involving the Powerset and Singleton Operators
Candidate: Cantone, Domenico A.
Advisor(s): Schwartz, Jacob T.
Abstract:
The class of unquantified formulae of set theory involving Boolean operators, the powerset and singleton operators, and the equality and membership predicates is shown to have a solvable satisfiability problem. It is also shown that whenever a formula φ in the above class is satisfiable there exists a hereditarily finite model of φ whose rank is bounded by a doubly exponential expression in the number of variables occurring in φ.
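Schematically, the small-model property stated above can be written as follows; the constant c is a placeholder for the exact doubly exponential expression worked out in the thesis:

```latex
% Schematic restatement of the small-model property; c stands in for
% the constants fixed in the thesis.
\[
  \varphi \ \text{satisfiable}
  \;\Longrightarrow\;
  \exists\, M \ \text{hereditarily finite, } M \models \varphi,\quad
  \operatorname{rank}(M) \,\le\, 2^{2^{c\,n}},
\]
where $n$ is the number of variables occurring in $\varphi$.
```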
-
Ph.D. Thesis
1987
Title: Tape Reversal and Parallel Time
Candidate: Chen, Jianer
Advisor(s): Yap, Chee; Gross, Jonathan
Abstract:
Recent research has shown an intimate relationship between reversal complexity on multitape Turing machines and parallel computation time. In this dissertation, we systematically study the structural properties of these two important complexity measures and the relationship between them. We develop some basic techniques necessary for establishing analogues of well-known theorems on space and time complexity. We give a linear simulation of deterministic space by deterministic reversal on multitape Turing machines and the first known tape reduction theorem for reversal complexity. As applications of the tape reduction theorem, we prove a hierarchy theorem and show the existence of complete languages for reversal complexity. The relationship between reversal and tape is also discussed. We show that with respect to reversal complexity there is an intrinsic difference between 1-tape and 2-tape Turing machines. More precisely, we show that in the deterministic case, 2-tape Turing machines can simulate k-tape Turing machines with only a polynomial (quadratic) increase in reversals, while 1-tape Turing machines do not have this property if $P \neq PSPACE$; in the nondeterministic case, reversal complexity is too powerful to be a complexity measure on 2-tape Turing machines, but on 1-tape Turing machines it is a reasonable complexity measure which is linearly related to space complexity. For parallel computation, we introduce the concepts of deterministic, nondeterministic, and oracle circuits in a very natural way. Based on our model of oracle circuits, we build a log-depth hierarchy in parallel computation and show that this hierarchy corresponds exactly to the well-known NC hierarchy. From this point of view, some structural properties of the NC hierarchy are discussed. Log-depth many-one reducibility and log-depth Turing reducibility are also discussed. Several new complete languages for the class of deterministic log-space languages are presented. Finally, we give detailed proofs of the polynomial relationship between reversal complexity on multitape Turing machines and parallel time complexity on uniform circuits. (Some of these proofs have been outlined by Pippenger.)
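Two of the stated results, restated schematically; DREVERSAL is used here as an ad-hoc name for reversal-bounded deterministic classes (the dissertation fixes the machine model and constants):

```latex
% Ad-hoc notation: DREVERSAL(r) = languages decided by deterministic
% multitape TMs making O(r) reversals; subscripts fix the tape count.
\[
  \mathrm{DSPACE}(s(n)) \subseteq \mathrm{DREVERSAL}(O(s(n)))
  \quad\text{(linear simulation of space by reversal)}
\]
\[
  \mathrm{DREVERSAL}_{k\text{-tape}}(r(n)) \subseteq
  \mathrm{DREVERSAL}_{2\text{-tape}}(O(r(n)^{2}))
  \quad\text{(deterministic tape reduction)}
\]
```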
-
Ph.D. Thesis
1987
Title: The Use of Data Flow Information for the Selection and Evaluation of Software Test Data
Candidate: Frankl, Phyllis G.
Advisor(s): Weyuker, Elaine
Abstract:
Two families of software test data adequacy criteria, each based on data flow analysis, are defined for programs written in Pascal. Their formal properties are investigated, and interactive software testing tools based on them are described. The first of these families, the data flow testing criteria, was previously defined for programs written in a simple language. We extend the definitions to apply to programs written in Pascal. The data flow testing criteria are based purely on the syntax of the program being tested. They require that the test data execute certain paths from program points at which variables are defined to program points at which those definitions are used. We describe the design and implementation of a software testing tool, ASSET, based on the data flow testing criteria. A serious weakness of the data flow testing criteria is that for some programs there exists no set of test data which is adequate for testing the program according to these criteria. This problem arises from unexecutable paths in the program. The second family of criteria, the feasible data flow testing criteria, circumvents this problem by eliminating from consideration those definition-use associations which can never be exercised. We show that certain formal properties of the feasible data flow testing criteria differ significantly from those of the data flow testing criteria. Since it is undecidable whether a given set of test data satisfies a given feasible data flow testing criterion, feasible data flow testing cannot be fully automated. However, it can be partially automated. We describe a heuristic method, the path expression method, which attempts to determine whether a given definition-use association can be exercised. The path expression method is based on a combination of data flow analysis and symbolic evaluation. We introduce a new symbolic evaluation technique which is more general, but essentially no more expensive, than symbolic execution. The path expression method, together with ASSET, constitutes a tool which partially automates feasible data flow testing.
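To make the notion of a definition-use association concrete, here is a minimal sketch (in Python, on a hand-built control-flow graph for a toy program, rather than Pascal, and not the ASSET tool itself) of enumerating the def-use pairs exercised by a set of test paths:

```python
# Toy illustration of def-use associations and a coverage check.
# Not the ASSET tool from the thesis; works on an explicit graph.

# Each node: (defs, uses). Node ids follow control flow in the paths below.
NODES = {
    1: ({"x"}, set()),        # x := input()
    2: ({"y"}, {"x"}),        # y := x * 2
    3: (set(), {"y"}),        # if y > 0 ...
    4: ({"x"}, set()),        # x := 0   (redefinition kills node 1's def)
    5: (set(), {"x", "y"}),   # write(x, y)
}

def covered_pairs(path):
    """Def-use pairs (def_node, use_node, var) exercised by an executed
    path, requiring the subpath between def and use to be def-clear."""
    pairs = set()
    for i, d in enumerate(path):
        for var in NODES[d][0]:
            for j in range(i + 1, len(path)):
                u = path[j]
                if var in NODES[u][1]:
                    pairs.add((d, u, var))
                if var in NODES[u][0]:   # redefinition: def at d is killed
                    break
    return pairs

# Two test executions (paths through the graph).
tests = [[1, 2, 3, 5], [1, 2, 3, 4, 5]]
exercised = set().union(*(covered_pairs(p) for p in tests))
print(sorted(exercised))
# (1, 5, 'x') is exercised only by the first path: on the second,
# the redefinition at node 4 kills node 1's definition of x.
```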
-
Ph.D. Thesis
1987
Title: Control and Task Planning for a Four Finger Dextrous Manipulator
Candidate: Hor, Maw-Kae
Abstract:
Various attempts have been made to build a dextrous hand and to study the control and planning issues involved in dextrous manipulation. However, in many practical situations, the following problems make the real-time control and planning of dextrous manipulation very difficult: (1) the discrepancy between the model and reality (for example, imprecise knowledge of inertia, friction, and geometric dimensions), (2) the inadequacy of the control theory used in controlling a highly non-linear manipulator, (3) the numerous computations required in the dynamic and kinematic calculations, and (4) the lack of abstract-level manipulation primitives. This thesis investigates several issues related to dextrous manipulation and control. We designed and built a planar manipulator, the Four Finger Manipulator, for studying dextrous manipulation. We also developed a prototype software structure for multi-finger manipulators. Models for quasi-static control and real-time calculation are presented which make real-time control possible. Heuristics are described for: (a) choosing the finger gripping forces of a force-controlled adaptive frictional grasp, (b) estimating the trajectory in compliant motions, and (c) coordinating finger groups to perform tasks that require multiple finger groups. A set of manipulation primitives and algorithms has been developed on the Four Finger Manipulator. Successful performance is demonstrated for various tasks.
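As an illustration of the kind of quasi-static reasoning behind heuristic (a), the sketch below picks normal forces for a planar two-finger frictional grasp; the friction coefficient, safety factor, and equal load split are illustrative assumptions, not the thesis's controller:

```python
# Illustrative quasi-static grip-force heuristic for a planar two-finger
# frictional grasp (numbers are made up; not the thesis's algorithm).
# Two fingers press horizontally on opposite faces of an object of
# weight W; each carries W/2 tangentially, subject to |f_t| <= mu * f_n.

MU = 0.4        # assumed coefficient of friction
SAFETY = 1.5    # safety factor on the friction-cone constraint
W = 2.0         # object weight in newtons (illustrative)

def normal_force(weight, mu=MU, safety=SAFETY):
    """Smallest per-finger normal force that holds the object, scaled by
    a safety factor (equal load split between the two fingers)."""
    tangential_per_finger = weight / 2.0
    return safety * tangential_per_finger / mu

def inside_friction_cone(f_t, f_n, mu=MU):
    """Quasi-static check: contact force stays inside the friction cone."""
    return abs(f_t) <= mu * f_n

f_n = normal_force(W)
print(f"normal force per finger: {f_n:.2f} N")
print("holds:", inside_friction_cone(W / 2.0, f_n))
```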
-
Ph.D. Thesis
1987
Title: An Analyzer for the Information Content of Sentences (Semantics)
Candidate: Johnson, Stephen Bennett
Advisor(s): Sager, Naomi
Abstract:
An algorithm is presented which produces a representation of the information content of sentences as a tree of operator words predicating on argument words. The Sentence Analyzer employs a new type of formal grammar which describes the surface syntax of sentences, grammatical constraints, and the operator-argument relations underlying the surface forms. The algorithm works left to right, first obtaining the operator-argument representations of words from a lexicon, and then applying grammar rules to construct operator-argument subtrees over longer and longer segments of the sentence. All alternate analyses are developed simultaneously. The grammar rules are based on the detailed mathematical grammar of Zellig Harris, termed here Composition-Reduction Grammar, in which sentences are generated by a process of operator words entering on argument words. As words enter, this tree structure is linearized. Various reductions may apply to words which are redundant in the operator-argument structure, producing variations such as morphological changes, and the dropping of words from the sentence. Reduction yields sentences with a more compact form, the form we see, while preserving the objective information content. The fundamental unit of the formal grammar developed here is the descriptor, a tuple of six attributes, which represents an operator-argument word class. A descriptor is similar to traditional word classes like nouns and verbs, but can carry information specific to an individual word to form an entry in the lexicon. More importantly, descriptors can replace the use of symbols for phrases in traditional grammar. This is because a descriptor can stand for the entire word sequence spanned by the operator-argument subtree of which it is the root. This feature enables the grammar rules to be specified as a relation between two descriptors whose subtrees span adjacent word sequences. The two words related by a rule either have a simple operator-argument relation, or a more complex operator-argument relation made compact by reduction. The result is a formal grammar in which all relations are between words, with sufficient power for the Sentence Analyzer to perform a direct analysis of sentences into their informational relations, without recourse to intricate transformational procedures.
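A schematic sketch of the descriptor idea follows; the six attribute names are hypothetical placeholders (the thesis defines the actual attributes), and the single rule schema only hints at how a rule relates two descriptors whose subtrees span adjacent word sequences:

```python
# Schematic sketch of descriptors; attribute names are hypothetical
# placeholders, not the six attributes defined in the thesis.
from collections import namedtuple

# A descriptor stands for the whole word sequence spanned by the
# operator-argument subtree it roots.
Descriptor = namedtuple(
    "Descriptor", "word word_class arg_slots filled span reduced")

def attach(op, arg):
    """One rule schema: an operator descriptor absorbs an adjacent
    subtree as its next argument, yielding a descriptor whose span
    covers both word sequences."""
    if op.word_class != "operator" or op.filled >= op.arg_slots:
        return None
    lo = min(op.span[0], arg.span[0])
    hi = max(op.span[1], arg.span[1])
    return op._replace(filled=op.filled + 1, span=(lo, hi))

john = Descriptor("John", "argument", 0, 0, (0, 1), False)
sleeps = Descriptor("sleeps", "operator", 1, 0, (1, 2), False)
print(attach(sleeps, john))   # operator subtree now spans "John sleeps"
```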
-
Ph.D. Thesis
1987
Title: Description of Shape using Orientation and Propagation Flow
Candidate: Menczel, Yaron
Abstract:
A new theory for the partition of an image into its syntactical primitives is introduced. The method uses edge segments and their orientation to mark an image with useful syntactical information. The marking is done by defining a flow initiating from the boundary and propagating inward into the shape. Three algorithms are introduced. The first sends flow waves into the object in a direction perpendicular to the edges. The second is an iterative version of the first, with the addition that an edge detector is constantly applied to the growing object. The third labels the edges with their orientation and then iteratively applies a majority-vote selection to spread the orientation, with unlabeled pixels inactive in the voting process. The propagation is moderated by a number of heuristics that ensure local and global support within the flow. The flow carries orientation data and spreads the information to all interior pixels. A connected-component algorithm based on orientation is then used to construct segments of uniform orientation. These segments constitute the basis of a structural description. The new approach is compared to other methods of segmentation and representation of shapes. These other methods are not always capable of explaining human perception of shapes in a uniform and unique way, and methods designed for simple perceptual domains cannot deal with occlusion, texture, touching bodies, and subjective contours. In contrast, the new proposal is shown to work with simple figures as well as more complex real-world images. Several examples are given to show the usefulness of the approach. In particular, we give an implementation of a system that performs automatic character recognition based on this method.
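The third algorithm lends itself to a compact sketch; the grid encoding, neighborhood, and stopping rule below are simplifications for illustration:

```python
# Sketch of the third algorithm above: orientations assigned to edge
# pixels spread inward by iterated majority voting, with unlabeled
# pixels taking no part in the vote. Simplified for illustration.
from collections import Counter

UNLABELED = None

def propagate(grid, iterations=10):
    """grid: 2-D list of orientation labels (e.g. 0/45/90/135) or None."""
    h, w = len(grid), len(grid[0])
    for _ in range(iterations):
        new = [row[:] for row in grid]
        changed = False
        for y in range(h):
            for x in range(w):
                if grid[y][x] is not UNLABELED:
                    continue  # labeled pixels keep their orientation
                votes = Counter(
                    grid[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w
                    and grid[y + dy][x + dx] is not UNLABELED)
                if votes:
                    new[y][x] = votes.most_common(1)[0][0]
                    changed = True
        grid = new
        if not changed:
            break
    return grid

# A tiny strip whose left edge is labeled 90 degrees; the flow fills inward.
strip = [[90, None, None, None] for _ in range(3)]
for row in propagate(strip):
    print(row)
```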
-
Ph.D. Thesis
1987
Title: GENERIC: A Programming Language for VLSI Layout and Layout Manipulation
Candidate: Solworth, Jon A.
Abstract:
We describe a programming language, GENERIC (GENERation of Integrated Circuits), for producing high-quality, general-purpose layout of custom integrated circuits. Unlike other VLSI programming languages, in GENERIC existing layouts can be manipulated by the VLSI operators to produce new layouts. The design of a layout in GENERIC starts with a circuit description which contains the active components and electrical nets. The circuit description (sometimes called an abstract layout) is then transformed into a realizable layout by the application of VLSI operators. These operators are both design-rule safe and wire-connectivity maintaining. Built-in operations include relative placement, primitive compaction, and orientation. A novel mechanism called planes is described, which for the first time enables topological manipulations that do not violate design rules. GENERIC forms the kernel of a VLSI design system. We also describe the cell library, Flexcell, which contains parameterized and modifiable cells. Cells in the Flexcell library are created using cell generators, but unlike traditional cell generators, the layout generated need not exhibit a high degree of regularity. For each cell, a number of templates are provided which encode known good layout schemes. Cells created with a template can then be modified using utilities written in GENERIC. Hence, Flexcell provides highly optimized cells which can be reused in many different environments.
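To illustrate what design-rule-safe relative placement means, here is a toy embedded-DSL sketch; the operator names and the single spacing rule are invented for illustration and are not GENERIC's actual operators:

```python
# Toy sketch of relative-placement layout operators. The names and the
# lone design rule are invented; this is not GENERIC's API.
from dataclasses import dataclass

MIN_SPACING = 3   # assumed design rule: minimum spacing in lambda units

@dataclass
class Cell:
    name: str
    x: int = 0
    y: int = 0
    w: int = 10
    h: int = 10

def beside(a, b, spacing=MIN_SPACING):
    """Place b to the right of a, never closer than the design rule allows."""
    b.x = a.x + a.w + max(spacing, MIN_SPACING)
    b.y = a.y
    return b

def above(a, b, spacing=MIN_SPACING):
    """Place b above a with design-rule-safe spacing."""
    b.x = a.x
    b.y = a.y + a.h + max(spacing, MIN_SPACING)
    return b

alu = Cell("alu", w=40, h=20)
regs = beside(alu, Cell("regfile", w=30, h=20))
ctrl = above(alu, Cell("control", w=40, h=10))
print(regs, ctrl)
```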
-
Ph.D. Thesis
1987
Title: A Theory of Concurrent Programs and Test Data Adequacy
Candidate: Weiss, Stewart Neil
Abstract:
We establish a general framework for the investigation of concurrent program-based adequacy criteria and we extend notions of program-based test data adequacy to the domain of concurrent programs. This work is consistent with the testing theory proposed by Gourlay and the axiomatization of test data adequacy proposed by Weyuker. Our method is to define a representation of concurrent programs which is particularly suited to the study of the problems of concurrent program testing, and which serves as a model for an extension of a theory of testing to such programs. Our framework also provides the basis for a practical testing tool for concurrent programs. We prove theoretical results concerning various properties of our representation of concurrent programs, among which are notions of completeness, consistency, and computability. We propose approximate solutions to some of the undecidable problems which we encounter. We demonstrate that our theory of concurrent program testing may be used to assess the complexity and reliability of various adequacy criteria for testing concurrent programs. We use our model to investigate and compare concurrent program based adequacy criteria derived from a subclass of structural coverage criteria including a large family of data flow criteria. Finally, we propose practical methods of using our framework as an aid to concurrent program testing.