Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
Ph.D. Thesis
1999
Higher-Order Conditional Synchronization
Afshartous, Niki
Title: Higher-Order Conditional Synchronization
Candidate: Afshartous, Niki
Advisor(s): Goldberg, Benjamin
Abstract:
Conditional synchronization - a mechanism that conditionally blocks a thread based on the value of a boolean expression - currently exists in several programming languages. We propose promoting conditional synchronization to first-class status, allowing the synchronization object representing a suspended conditional synchronization to be passed as a value.
To demonstrate our idea we extend Concurrent ML and present several examples illustrating the expressiveness of first-class conditional synchronization (FCS). FCS has broadcast semantics, making it appropriate for applications such as barriers and discrete-event simulation. The semantics also guarantee that no transient store configurations are missed. The end result facilitates abstraction and adds flexibility in writing concurrent programs. To minimize re-evaluation of synchronization conditions, we propose a static analysis and translation that identifies, for the run-time system, the expressions that could affect the value of a synchronization condition. The static analysis (which is based on an effect type system) therefore precludes excessive run-time system polling of synchronization conditions.
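As a rough illustration of the idea (a Python analogue with broadcast wakeups, not the thesis's Concurrent ML extension; all names here are ours):

    import threading

    class CondSync:
        """First-class conditional synchronization: a value that can be
        passed around, on which threads block until pred(state) holds."""
        def __init__(self, pred):
            self.pred = pred
            self.cond = threading.Condition()
            self.state = {}

        def update(self, key, value):
            with self.cond:
                self.state[key] = value
                self.cond.notify_all()   # broadcast: waiters re-check, so no
                                         # satisfying store configuration is missed
        def sync(self):
            with self.cond:
                self.cond.wait_for(lambda: self.pred(self.state))

A barrier, for example, is just CondSync(lambda s: s.get("arrived", 0) >= n) handed to each participating thread.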
-
Ph.D. Thesis
1999
Metacomputing on Commodity Computers
Baratloo, Arash
Title: Metacomputing on Commodity Computers
Candidate: Baratloo, Arash
Advisor(s): Kedem, Zvi
Abstract:
The advantages of using a set of networked commodity computers for parallel processing are well understood: such computers are cheap, widely available, and mostly underutilized. So why has the use of such environments for compute-intensive applications not proliferated? A major reason is that the inherent complexities of programming applications and coordinating their execution on networked computers outweigh the advantages.
In networked environments populated with multiuser commodity computers, both the computing speed and the number of available computers for executing parallel programs may change frequently and unpredictably. As a consequence, programs need to continuously adapt their execution to the changing environment. The execution of an application must therefore address such issues as dynamic changes in effective machine speeds, dynamic changes in the number of available machines, and sudden network and machine failures. It is not feasible for an application programmer to write programs that adapt to the behavior of a system whose critical aspects cannot be anticipated.
I will present a unified set of techniques to implement a virtual reliable parallel-processing platform on a set of unreliable computers with temporally varying execution speeds. These techniques are specifically designed for automatically adapting the execution of parallel programs to distributed environments. I will explain these techniques in the context of two software systems, Calypso and ResourceBroker, that have been built to validate them.
Calypso gives a programmer a simple tool to build and effectively execute parallel programs on a set of commodity computers. The notable properties of Calypso are: (1) a simple, intuitive programming model based on a virtual machine interface; (2) separation of logical and physical parallelism, allowing the source code to codify the algorithm rather than the execution environment; and (3) a runtime system that efficiently adapts the execution of the program to the dynamic nature of the runtime environment. ResourceBroker is a resource manager that demonstrates a novel technique to dynamically manage the assignment of computers to parallel programs. ResourceBroker can work with a variety of parallel systems, even transparently managing those that are not aware of its existence, such as PVM and MPI, and will distribute available resources fairly among multiple computations. As a result, a mix of parallel programs, written using diverse programming systems, can effectively execute concurrently on a set of computers.
-
TR1999-778
1999
Comic Strips for Algorithm Visualization
Biermann, H.;
Cole, R.
Title: Comic Strips for Algorithm Visualization
Author(s): Biermann, H.; Cole, R.
Abstract:
This paper presents visualizations of binary search trees and splay trees. The visualizations comprise sequences of figures or frames, called comic strips. Consecutive frames are viewed two at a time to facilitate user (viewer) understanding of the algorithm steps. The visualizations are implemented in Java to facilitate their wide use. This paper explores several other considerations in the design of instructional visualizations.
-
TR1999-781
1999
Piecewise Smooth Subdivision Surfaces with Normal Control
Biermann, H.;
Levin, A.; Zorin, D.
Title: Piecewise Smooth Subdivision Surfaces with Normal Control
Author(s): Biermann, H.; Levin, A.; Zorin, D.
Abstract:
In this paper we introduce improved rules for Catmull-Clark and Loop subdivision that overcome several problems with the original schemes (lack of smoothness at extraordinary boundary vertices, folds near concave corners). In addition, our approach to rule modification allows generation of surfaces with prescribed normals, both on the boundary and in the interior, which considerably improves control of the shape of surfaces.
-
TR1999-784
1999
Stateless Remote Environment Navigation with View Compression
Biermann, H.;
Hertzmann, A.; Meyer, J.; Perlin, K.
Title: Stateless Remote Environment Navigation with View Compression
Author(s): Biermann, H.; Hertzmann, A.; Meyer, J.; Perlin, K.
Abstract:
We present a set of very low bandwidth techniques for navigating remote environments. In a typical setup using our system, a virtual environment resides on a server machine, and one or more users explore the environment from client machines. Each client uses previous views of the environment to predict the next view, using the known camera motion and image-based rendering techniques. The server performs the same prediction, and sends only the difference between the predicted and actual view. Compressed difference images require significantly less bandwidth than the compressed images of each frame, and thus can yield much higher frame rates. To request a view, the client simply sends the coordinates of the desired view and of the previous view to the server. This avoids the overhead of maintaining connections between the server and each client.
No restrictions are placed on the scene or the camera motions; the view compression technique may be used with arbitrarily complex 3D scenes or dynamically changing views from a web camera or a digital television broadcast. A lossy compression scheme is presented in which the client estimates the cumulative error in each frame, and requests a complete refresh before errors become noticeable.
This work is applicable to remote exploration of virtual worlds such as on head-mounted displays, Digital Television, or over the Internet.
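A minimal sketch of the predict-and-diff exchange (Python; render() and predict() here are trivial stand-ins for the paper's renderer and image-based prediction, not its actual code):

    import numpy as np

    def render(cam):                      # server-side ground truth for a camera
        rng = np.random.default_rng(abs(hash(cam)) % 2**32)
        return rng.integers(0, 256, (64, 64), dtype=np.uint8)

    def predict(prev_view, prev_cam, cam):
        return prev_view                  # placeholder for image-based warping

    def server_reply(prev_cam, cam):
        # The server reproduces the client's prediction and sends only the
        # residual, which compresses far better than a full frame.
        pred = predict(render(prev_cam), prev_cam, cam).astype(np.int16)
        return render(cam).astype(np.int16) - pred

    def client_frame(prev_view, prev_cam, cam, residual):
        pred = predict(prev_view, prev_cam, cam).astype(np.int16)
        return np.clip(pred + residual, 0, 255).astype(np.uint8)

Because the client sends both camera poses with each request, the server keeps no per-client state.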
-
Ph.D. Thesis
1999
A Maximum Entropy Approach to Named Entity Recognition
Borthwick, Andrew
Title: A Maximum Entropy Approach to Named Entity Recognition
Candidate: Borthwick, Andrew
Advisor(s): Grishman, Ralph
Abstract:
This thesis describes a novel statistical named-entity (i.e. ``proper name'') recognition system known as ``MENE'' (Maximum Entropy Named Entity). Named entity (N.E.) recognition is a form of information extraction in which we seek to classify every word in a document as being a person-name, organization, location, date, time, monetary value, percentage, or ``none of the above''. The task has particular significance for Internet search engines, machine translation, the automatic indexing of documents, and as a foundation for work on more complex information extraction tasks.
Two of the most significant problems facing the constructor of a named entity system are the questions of portability and system performance. A practical N.E. system will need to be ported frequently to new bodies of text and even to new languages. The challenge is to build a system which can be ported with minimal expense (in particular minimal programming by a computational linguist) while maintaining a high degree of accuracy in the new domains or languages.
MENE attempts to address these issues through the use of maximum entropy probabilistic modeling. It utilizes a very flexible object-based architecture which allows it to make use of a broad range of knowledge sources in making its tagging decisions. In the DARPA-sponsored MUC-7 named entity evaluation, the system displayed an accuracy rate well above the median, demonstrating that it can achieve the performance goal. In addition, we demonstrate that the system can be used as a post-processing tool to enhance the output of a hand-coded named entity recognizer through experiments in which MENE improved on the performance of N.E. systems from three different sites. Furthermore, when all three external recognizers are combined under MENE, we are able to achieve very strong results which, in some cases, appear to be competitive with human performance.
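For reference, a conditional maximum entropy tagger of this kind (a standard formulation; the abstract does not spell out MENE's exact parameterization) scores a tag $t$ in context (history) $h$ as

    $p(t|h) = \frac{1}{Z(h)} \exp\big(\sum_i \lambda_i f_i(h,t)\big)$, with $Z(h) = \sum_{t'} \exp\big(\sum_i \lambda_i f_i(h,t')\big)$,

where the $f_i$ are binary features drawn from the system's knowledge sources and the weights $\lambda_i$ are estimated from training data.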
Finally, we demonstrate the trans-lingual portability of the system. We ported the system to two Japanese-language named entity tasks, one of which involved a new named entity category, ``artifact''. Our results on these tasks were competitive with the best systems built by native Japanese speakers despite the fact that the author speaks no Japanese.
-
TR1999-787
1999
Recovering Non-Rigid 3D Shape from Image Streams
Bregler, C.;
Hertzmann, A.; Biermann, H.
Title: Recovering Non-Rigid 3D Shape from Image Streams
Author(s): Bregler, C.; Hertzmann, A.; Biermann, H.
Abstract:
This paper addresses the problem of recovering 3D non-rigid shape models from image sequences. For example, given a video recording of a talking person, we would like to estimate a 3D model of the lips and the full head and its internal modes of variation. Many solutions that recover 3D shape from 2D image sequences have been proposed; these so-called structure-from-motion techniques usually assume that the 3D object is rigid. For example, Tomasi and Kanade's factorization technique is based on a rigid shape matrix, which produces a tracking matrix of rank 3 under orthographic projection. We propose a novel technique based on a non-rigid model, where the 3D shape in each frame is a linear combination of a set of basis shapes. Under this model, the tracking matrix is of higher rank, and can be factored in a three-step process to yield pose, configuration, and shape. We demonstrate this simple but effective algorithm on video sequences of speaking people. We were able to recover 3D non-rigid facial models with high accuracy.
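In the standard formulation of such a model (notation ours, not the report's), the shape in frame $t$ is a linear combination

    $S_t = \sum_{l=1}^{K} c_{t,l} B_l$,

where each basis shape $B_l$ is a $3 \times P$ matrix of $P$ points; under orthography the $2F \times P$ tracking matrix over $F$ frames then has rank at most $3K$, and the factorization recovers camera pose, the configuration weights $c_{t,l}$, and the basis shapes.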
-
Ph.D. Thesis
1999
Algorithms for Nonlinear Models in Computational Finance and their Object-oriented Implementation
Buff, Robert
Title: Algorithms for Nonlinear Models in Computational Finance and their Object-oriented Implementation
Candidate: Buff, Robert
Advisor(s): Avellaneda, Marco
Abstract:
Individual components of financial option portfolios cannot be evaluated independently under nonlinear models in mathematical finance. This entails increased algorithmic complexity if the options under consideration are path-dependent. We describe algorithms that price portfolios of vanilla, barrier and American options under worst-case assumptions in an uncertain volatility setting. We present a generalized approach to worst-case volatility scenarios in which only the duration, but not the starting dates of periods of high volatility risk are known. Our implementation follows object-oriented principles and is modular and extensible. Combinatorial and numerical algorithms are separate and orthogonal to each other. We make our tools available to a wide audience by using standard Internet technologies.
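For orientation: in the uncertain volatility model of Avellaneda, Levy, and Paras that this work builds on, volatility is only known to lie in a band $[\sigma_{\min}, \sigma_{\max}]$, and the worst-case value of a position solves the nonlinear Black-Scholes-Barenblatt equation

    $V_t + \frac{1}{2}\,\sigma^*(V_{SS})^2 S^2 V_{SS} + r S V_S - r V = 0$, with $\sigma^*(\Gamma) = \sigma_{\max}$ if $\Gamma \geq 0$ and $\sigma_{\min}$ otherwise

(sign conventions vary). Because the effective volatility switches with the sign of the gamma of the whole portfolio, individual options cannot be priced independently, which is the source of the algorithmic complexity discussed above.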
-
TR1999-791
1999
Optimizing Matrix Stability
Burke, J. V.;
Lewis, A. S.; Overton, M. L.
Title: Optimizing Matrix Stability
Author(s): Burke, J. V.; Lewis, A. S.; Overton, M. L.
Abstract:
Given an affine subspace of square matrices, we consider the problem of minimizing the spectral abscissa (the largest real part of an eigenvalue). We give an example whose optimal solution has Jordan form consisting of a single Jordan block, and we show, using non-Lipschitz variational analysis, that this behavior persists under arbitrarily small perturbations to the example. Thus although matrices with nontrivial Jordan structure are rare in the space of all matrices, they appear naturally in spectral abscissa minimization.
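In symbols (notation ours), the problem is

    $\min_{x \in R^m} \alpha(A_0 + x_1 A_1 + \cdots + x_m A_m)$, where $\alpha(A) = \max\,\{\mathrm{Re}\,\lambda : \lambda$ an eigenvalue of $A\}$.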
-
TR1999-790
1999
Variational Analysis of Non-Lipschitz Spectral Functions
Burke, J. V.;
Overton, M. L.
Title: Variational Analysis of Non-Lipschitz Spectral Functions
Author(s): Burke, J. V.; Overton, M. L.
Abstract:
We consider spectral functions $f \circ \lambda$, where $f$ is any permutation-invariant mapping from $C^n$ to $R$, and $\lambda$ is the eigenvalue map from $C^{n \times n}$ to $C^n$, ordering the eigenvalues lexicographically. For example, if $f$ is the function "maximum real part", then $f \circ \lambda$ is the spectral abscissa, while if $f$ is "maximum modulus", then $f \circ \lambda$ is the spectral radius. Both these spectral functions are continuous, but they are neither convex nor Lipschitz. For our analysis, we use the notion of subgradient extensively analyzed in Variational Analysis, R.T. Rockafellar and R. J.-B. Wets (Springer, 1998), which is particularly well suited to the variational analysis of non-Lipschitz spectral functions. We derive a number of necessary conditions for subgradients of spectral functions. For the spectral abscissa, we give both necessary and sufficient conditions for subgradients, and precisely identify the case where subdifferential regularity holds. We conclude by introducing the notion of semistable programming: minimizing a linear function of a matrix subject to linear constraints, together with the constraint that the eigenvalues of the matrix all lie in the right half-plane or on the imaginary axis. This is a generalization of semidefinite programming for non-Hermitian matrices. Using our analysis, we derive a necessary condition for a local minimizer of a semistable program, and give a generalization of the complementarity condition familiar from semidefinite programming.
-
TR1999-793
1999
Automatic Configuration and Run-time Adaptation of Distributed Applications
Chang, F.;
Karamcheti, V.
Title: Automatic Configuration and Run-time Adaptation of Distributed Applications
Author(s): Chang, F.; Karamcheti, V.
Abstract:
Current technology trends point towards both an increased heterogeneity in hardware platforms and an increase in the mechanisms available to applications for controlling how these platforms are utilized. These trends motivate the design of resource-aware distributed applications, which proactively monitor and control utilization of the underlying platform, ensuring a desired performance level by adapting their behavior to changing resource characteristics.
This paper describes a general framework for enabling application adaptation on distributed platforms. The framework combines programmer specification of alternate execution behaviors (configurations) with automatic support for deciding when and how to adapt, relying extensively on two components: (1) profile-based modeling of application behavior, automatically generated by measuring application performance in a virtual execution environment with controllable resource consumption, and (2) application-specific continuous monitoring of current resource characteristics. The latter detects when application configurations need to change, while the former guides the selection of a new configuration.
We evaluate these framework components using an interactive image visualization application. Our results demonstrate that starting from a natural specification of alternate application behaviors and an automatically generated performance database, our framework permits the application to both configure itself in diverse distributed environments and adapt itself to run-time changes in resource characteristics so as to satisfy user preferences of output quality.
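The decision step can be pictured as follows (an illustrative sketch; the function names and the shape of the profile database are ours, not the paper's API):

    def best_configuration(configs, profile_db, resources, preference):
        """Pick the configuration whose profiled model predicts the best
        output under the currently monitored resource characteristics."""
        # profile_db[c] is a performance model fitted offline in the
        # controllable virtual execution environment.
        predictions = {c: profile_db[c](resources) for c in configs}
        return max(configs, key=lambda c: preference(predictions[c]))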
-
TR1999-795
1999
Secure, User-level Resource-constrained Sandboxing
Chang, F.;
Itzkovitz, A.; Karamcheti, V.
Title: Secure, User-level Resource-constrained Sandboxing
Author(s): Chang, F.; Itzkovitz, A.; Karamcheti, V.
Abstract:
The popularity of mobile and networked applications has resulted in an increasing demand for execution ``sandboxes''---environments that impose irrevocable qualitative and quantitative restrictions on resource usage. Existing approaches either verify application compliance to restrictions at start time (e.g., using certified code or language-based protection) or enforce it at run time (e.g., using kernel support, binary modification, or active interception of the application's interactions with the operating system). However, their general applicability is constrained by the fact that they are either too heavyweight and inflexible, or are limited in the kinds of sandboxing restrictions and applications they can handle.
This paper presents a secure user-level sandboxing approach for enforcing both qualitative and quantitative restrictions on resource usage of applications in distributed systems. Our approach actively monitors an application's interactions with the underlying system, proactively controlling them to enforce the desired behavior. It leverages a core set of user-level mechanisms that are available in most modern operating systems: fine-grained timers, monitoring infrastructure (e.g., the /proc filesystem), debugger processes, priority-based scheduling, and page-based memory protection. We describe implementations of a sandbox that imposes quantitative restrictions on CPU, memory, and network usage on two commodity operating systems: Windows NT and Linux. Our results show that application usage of resources can be restricted to within 3% of desired limits with minimal run-time overhead.
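The flavor of the user-level mechanisms can be seen in a toy CPU throttle (POSIX signals and timers; an illustration of the duty-cycle idea only, not the authors' implementation):

    import os, signal, time

    def throttle(pid, share=0.5, quantum=0.01):
        """Keep process `pid` at roughly `share` of one CPU by alternately
        resuming and suspending it on a fine-grained timer."""
        while True:
            try:
                os.kill(pid, signal.SIGCONT)          # let it run
                time.sleep(quantum * share)
                os.kill(pid, signal.SIGSTOP)          # suspend it
                time.sleep(quantum * (1 - share))
            except ProcessLookupError:
                return                                # target exited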
-
Ph.D. Thesis
1999
Prototyping a Prototyping Language
Chen, Hseu-Ming
Title: Prototyping a Prototyping Language
Candidate: Chen, Hseu-Ming
Advisor(s): Harrison, Malcolm C.
Abstract:
The development of a prototyping language should follow the usual software-engineering methodology: starting with an evolvable, easily modifiable, working prototype of the proposed language. Rather than committing to the development of a mammoth compiler at the outset, we can design a translator from the prototyping language to another high-level language as a viable alternative. From a software-engineering point of view, the advantages of the translator approach are its shorter development cycle and lessened maintenance burden.
In prototyping language design, there are often innovative cutting-edge features which may not be well understood. It is inevitable that numerous experiments and revisions will be made to the current design, and hence supporting evolvability and modifiability is critical in the translator design.
In this dissertation we present an action-semantics-based framework for high-level source-to-source language translation. Action semantics is a form of denotational semantics that is based on abstract semantic algebras rather than Scott domains and lambda-notation. More specifically, this model not only provides a formal semantics definition for the source language and sets guidelines for implementations as well as migration, but also facilitates mathematical reasoning and a correctness proof of the entire translation process. The translation is geared primarily towards readability, maintainability, and type-preserving target programs, and only secondarily towards reasonable efficiency.
We have acquired a collection of techniques for translating certain non-trivial high-level features of prototyping and declarative languages into efficient procedural constructs in imperative languages like Ada95, while using the abstraction mechanisms of the target languages to maximize the readability of the target programs. In particular, we translate Griffin existential types into Ada95 using its object-oriented features, based on coercion calculus. This translation is actually more general, in that one can add existential types to a language supporting the object-oriented paradigm (with a modicum of extra syntax) without augmenting its type system, through intra-language transformation. We also present a type-preserving translation of closures which allows us to drop the whole-program-transformation requirement.
-
TR1999-792
1999
Edge-Coloring Bipartite Multigraphs in $O(E\log D)$ Time
Cole, R.;
Ost, K.; Schirra, S.
Title: Edge-Coloring Bipartite Multigraphs in $O(E\log D)$ Time
Author(s): Cole, R.; Ost, K.; Schirra, S.
Abstract:
Let $V$, $E$, and $D$ denote the cardinality of the vertex set, the cardinality of the edge set, and the maximum degree of a bipartite multigraph $G$. We show that a minimal edge-coloring of $G$ can be computed in $O(E\log D)$ time.
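For contrast, the classical alternating-path method below colors a bipartite multigraph with exactly $D$ colors in $O(VE)$ time (a Python baseline illustrating the problem; it is not the report's $O(E\log D)$ algorithm):

    def edge_color(n, edges, D):
        at = [dict() for _ in range(n)]     # at[v][c] = edge colored c at v
        color = [None] * len(edges)
        free = lambda v: next(c for c in range(D) if c not in at[v])
        for i, (u, v) in enumerate(edges):
            a, b = free(u), free(v)
            if a != b:
                # Swap colors a/b along the alternating path from v; in a
                # bipartite graph it cannot reach u, freeing a at both ends.
                path, w, c = [], v, a
                while c in at[w]:
                    j = at[w][c]
                    path.append(j)
                    x, y = edges[j]
                    w = y if x == w else x
                    c = b if c == a else a
                for j in path:              # recolor in two passes
                    x, y = edges[j]
                    del at[x][color[j]], at[y][color[j]]
                    color[j] = b if color[j] == a else a
                for j in path:
                    x, y = edges[j]
                    at[x][color[j]] = at[y][color[j]] = j
            color[i] = a
            at[u][a] = at[v][a] = i
        return color

For instance, edge_color(4, [(0, 2), (0, 3), (1, 2), (1, 3)], 2) returns a proper 2-coloring of the 4-cycle.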
-
TR1999-789
1999
Randomized Swap Matching in $O(m \log m \log |\Sigma| )$ time
Cole, R.;
Hariharan, R.
Title: Randomized Swap Matching in $O(m \log m \log |\Sigma| )$ time
Author(s): Cole, R.; Hariharan, R.
Abstract:
We give a randomized algorithm for the {\em Pattern Matching with Swaps} problem which runs in $O(m \log m \log |\Sigma| )$ time on a text of length $2m-1$ and a pattern of length $m$ drawn from an alphabet set of size $|\Sigma|$. This algorithm gives the correct answer with probability at least $1-\frac{1}{m}$ and does not miss a match. The best deterministic algorithm known for this problem takes $O(m^{4/3} \mbox{polylog}(m))$ time.
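A naive $O(nm)$ baseline makes the problem concrete (our sketch, for contrast with the randomized algorithm above):

    def swap_matches(p, t):
        """Positions where p matches t after swapping disjoint adjacent
        pairs of pattern characters (naive baseline, O(nm) overall)."""
        m = len(p)
        def ok(pos):
            i = 0
            while i < m:
                if p[i] == t[pos + i]:
                    i += 1
                elif (i + 1 < m and p[i] == t[pos + i + 1]
                      and p[i + 1] == t[pos + i]):
                    i += 2                  # match via one adjacent swap
                else:
                    return False
            return True
        return [pos for pos in range(len(t) - m + 1) if ok(pos)]

For example, swap_matches("ab", "aba") returns [0, 1]: position 0 matches directly and position 1 after swapping the pair.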
-
Ph.D. Thesis
1999
Distributed intelligence with bounded rationality: Applications to economies and networks
Even, Ron
Title: Distributed intelligence with bounded rationality: Applications to economies and networks
Candidate: Even, Ron
Advisor(s): Mishra, Bud
Abstract:
This dissertation examines bounded rationality as a tool in distributed systems of intelligent agents. We have implemented, in Java, a simulator for complex adaptive systems called CAF??. We use our framework to simulate a simple network and compare the effectiveness of bounded rationality at routing and admission control to that of a more traditional source-based greedy routing approach. We find that the boundedly rational approach is particularly effective when user behavior is synchronized, such as occurs during breaking news releases on the World Wide Web. We develop the key structures of our framework by first examining, through simulation, the behavior of boundedly rational speculators in a simple economy. We find them to be instrumental in bringing the economy quickly to price equilibrium, as well as in maintaining the equilibrium in the face of changing conditions. We draw several interesting conclusions as to the key similarities between economic and computational systems, as well as the situations where they differ drastically.
-
Ph.D. Thesis
1999
Pattern Discovery in Biology: Theory and Applications
Floratos, Aristidis
Title: Pattern Discovery in Biology: Theory and Applications
Candidate: Floratos, Aristidis
Advisor(s): Boppana, Ravi; Rigoutsos, Isidore
Abstract:
Molecular Biology studies the composition and interactions of life's agents, namely the various molecules (e.g. DNA, proteins, lipids) sustaining the living process. Traditionally, this study has been performed in wet labs using mostly physicochemical techniques. Such techniques, although precise and detailed, are often cumbersome and time consuming. On top of that, recent advances in sequencing technology have allowed the rapid accumulation of DNA and protein data. As a result a gap has been created (and is constantly being expanded): on the one side there is a rapidly growing collection of data containing all the information upon which life is built; on the other side, we are currently unable to keep up with the study of this data, impaired by the limits of existing analysis tools. It is obvious that alternative analysis techniques are badly needed. In this work we examine how computational methods can help in mining the information contained in collections of biological data. In particular, we investigate how sequence similarity among various macromolecules (e.g. proteins) can be exploited towards the extraction of biologically useful information.
-
Ph.D. Thesis
1999
Matching Algorithms and Feature Match Quality Measures for Model-Based Object Recognition with Applications to Automatic Target Recognition
Garcia-Keller, Martin
Title: Matching Algorithms and Feature Match Quality Measures for Model-Based Object Recognition with Applications to Automatic Target Recognition
Candidate: Garcia-Keller, Martin
Advisor(s): Hummel, Robert
Abstract:
In the fields of computational vision and image understanding, the object recognition problem can often be formulated as a problem of matching a collection of model features to features extracted from an observed scene. This dissertation is concerned with the use of feature-based match similarity measures and feature match algorithms in object detection and classification in the context of image understanding from complex signature data. Our applications are in the domains of target vehicle recognition from radar imagery, and binocular stereopsis.
In what follows, we will consider “image understanding” to encompass the set of activities necessary to identify objects in visual imagery and to establish meaningful three-dimensional relationships between the objects themselves, or between the object and the viewer. The main goal in image understanding then involves the transformation of images to symbolic representation, effectively providing a high-level description of an image in terms of objects, object attributes, and relationships between known objects. As such, image understanding subsumes the capabilities traditionally associated with image processing, object recognition and artificial vision [Crevier and Lepage 1997].
In human and/or biological vision systems, the task of object recognition is a natural and spontaneous one. Humans can recognize immediately and without effort a huge variety of objects from diverse perceptual cues and multiple sensorial inputs. The operations involved are complex and inconspicuous psychophysical and biological processes, including the use of properties such as shape, color, texture, pattern, motion, context, as well as considerations based on contextual information, prior knowledge, expectations, functionality hypothesis, and temporal continuity. These operations and their relation to machine object recognition and artificial vision are discussed in detail elsewhere [Marr 1982], [Biederman 1985], but they are not our concern in this thesis.
In this research, we consider only the simpler problem of model-based vision, where the objects to be recognized come from a library of three-dimensional models known in advance, and the problem is constrained using context and domain-specific knowledge.
The relevance of this work resides in its potential to support state-of-the-art developments in both civilian and military applications including knowledge-based image analysis, sensor exploitation, intelligence gathering, evolving databases, interactive environments, etc. A large number of applications are reviewed below in section 1.4. Experimental results are presented in Chapters 5, 6, and 7.
-
TR1999-777
1999
An Improved Intra-procedural May-alias Analysis Algorithm
Goyal, D.
Title: An Improved Intra-procedural May-alias Analysis Algorithm
Author(s): Goyal, D.
Abstract:
Hind et al.~\cite{Hind99} use a standard data flow framework \cite{Rosen79, Tarjan81} to formulate an intra-procedural may-alias computation. The intra-procedural aliasing information is computed by applying well-known iterative techniques to the Sparse Evaluation Graph (SEG) \cite{Choi91}. The computation requires a transfer function for each node that causes a potential pointer assignment (relating the data flow information flowing into and out of the node), and a set of aliases holding at the entry node of the SEG. The intra-procedural analysis assumes that precomputed information in the form of summary functions is available for all function-call sites in the procedure being analyzed. The time complexity of the intra-procedural may-alias computation for the algorithm presented by Hind et al.~\cite{Hind99} is $O(N^6)$ in the worst case (where $N$ is the size of the SEG). In this paper we present a worst case $O(N^3)$ time algorithm to compute the same may-alias information.
-
Ph.D. Thesis
1999
Learning to Play Network Games
Greenwald, Amy
Title: Learning to Play Network Games
Candidate: Greenwald, Amy
Advisor(s): Mishra, Bud
Abstract:
This talk concerns the strategic behavior of automated agents in the framework of network game theory, with particular focus on the collective behavior that arises via learning. In particular, ideas are conveyed on both the theory and simulation of learning in network games, in terms of two sample applications. The first application is network control, presented via an abstraction known as the Santa Fe bar problem, for which it is proven that rational learning does *not* converge to Nash equilibrium, the classic game-theoretic solution concept. On the other hand, it is observed via simulations that low-rationality learning, where agents trade off between exploration and exploitation, typically converges to mixed strategy Nash equilibria in this game. The second application is the economics of shopbots - agents that automatically search the Internet for price and product information - in which learning yields behaviors ranging from price wars to tacit collusion, with sophisticated low-rationality learning algorithms converging to Nash equilibria. This work forms part of a larger research program that advocates learning and game theory as a framework in which to model the interactions of computational agents in network domains.
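A toy version of the low-rationality dynamics in the bar problem (parameters and update rule are illustrative, not the thesis's exact algorithms):

    import random

    N, CAP, ROUNDS, EPS, LR = 100, 60, 2000, 0.1, 0.1
    value = [[0.0, 0.0] for _ in range(N)]   # per-agent estimates: [stay, go]

    for _ in range(ROUNDS):
        acts = [random.randrange(2) if random.random() < EPS
                else int(value[i][1] > value[i][0]) for i in range(N)]
        crowd = sum(acts)
        for i, a in enumerate(acts):
            payoff = (1.0 if crowd <= CAP else -1.0) if a else 0.0
            value[i][a] += LR * (payoff - value[i][a])   # exponential averaging

    print("attendance:", sum(acts), "of", N, "with capacity", CAP)

Attendance settles near the capacity: the exploration term keeps agents from synchronizing on a single pure strategy, echoing the convergence to mixed equilibria observed in the simulations.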
-
Ph.D. Thesis
1999
Experiments in refining graphical interface widgets
Hecker, Yaron Chanoch
Title: Experiments in refining graphical interface widgets
Candidate: Hecker, Yaron Chanoch
Abstract:
This thesis investigates GUIs and their shortcomings. We demonstrate that there is room for refinement of existing graphical user interfaces, including those interfaces with which we are most familiar. A foundation for our designs is first established. It consists of known human capabilities, especially concerning hand-eye coordination, short term and long term memory, and visual perception. Accumulated experience in static and animated visual design provides additional guides for our work. On the basis of this foundation we analyze existing widgets. A series of new widgets are then proposed to address observed deficiencies in existing designs for scrolling, multiple copy and paste in text environments, text insertion and selection, and window management. Lessons learned from analyzing our new designs and observations of existing widgets are generalized into principles of widget design.
-
TR1999-783
1999
Interactive 3D Scene Reconstruction from Images
Hertzmann, A.
Title: Interactive 3D Scene Reconstruction from Images
Author(s): Hertzmann, A.
Abstract:
We propose an interactive framework for reconstructing an arbitrary 3D scene consistent with a set of images, for use in example-based image synthesis. Previous research has used human input to specify feature matches, which are then processed off-line; however, it is very difficult to correctly match images without feedback. The central idea of this paper is to perform and display 3D reconstruction during user modification. By allowing the user to interactively manipulate the image correspondence and the resulting 3D reconstruction, we can exploit both the user's intuitive image understanding and the computer's processing power.
-
Ph.D. Thesis
1999
Automated Software Deployment
Jai, Benchiao
Title: Automated Software Deployment
Candidate: Jai, Benchiao
Advisor(s): Siegel, Alan
Abstract:
The work users do with an application can be divided into actual work accomplished using the application and overhead performed in order to use the application. The latter can be further partitioned based on the time at which the work is performed: before (application location and delivery), during (installation) and after (upgrade) the installation of the application. This category can be characterized as the software deployment overhead. This thesis presents a component architecture RADIUS (Rapid Application location, Delivery, Installation and Upgrade System) in which applications can be built with no software deployment overhead to the users. An application is deployed automatically by simply giving the user a document produced by the application. Furthermore, the facilities in RADIUS make the applications self-upgrading. In the end, the users perform no deployment overhead work at all.
The conventional way of using an application is to install the application first, then start using documents of the application. The object-oriented programming (OOP) paradigm suggests that this order should be reversed: the data should lead to the code. However, almost all software fails to meet this model of design at the persistence level. While modern software often uses OOP at the program level, the underlying operating systems do not support OOP at the document/file level. OOP languages use pointers to methods to indicate what operations can be performed on the objects. We extend the idea to include "pointers to applications". Each document has an attached application pointer, which is read by RADIUS when the document is opened. This application pointer is then used to locate and deliver the application module necessary for the document.
RADIUS is designed to be compatible with existing technologies and requires no extensions to either programming languages or operating systems. It is orthogonal to programming tools, is language-independent and compatible among operating systems, and consequently does not impose limitations on which environments the developers can use. We illustrate the implementations for the two most popular platforms today - C++ on Windows, and Java. RADIUS is also orthogonal to other component systems such as CORBA or COM and is easy to integrate with them.
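The document-leads-to-code idea can be sketched as follows (module layout and field names are hypothetical, not RADIUS's actual format):

    import importlib, json

    def open_document(path):
        with open(path) as f:
            doc = json.load(f)
        # The application pointer travels with the document; the runtime
        # resolves it, fetching the module if it is not installed locally.
        app = importlib.import_module(doc["app_pointer"])
        return app.open_payload(doc["payload"])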
-
TR1999-780
1999
A Domain Decomposition Method with Lagrange Multipliers for Linear Elasticity
Klawonn, A.;
Widlund, O. B.
Title: A Domain Decomposition Method with Lagrange Multipliers for Linear Elasticity
Author(s): Klawonn, A.; Widlund, O. B.
Abstract:
A new domain decomposition method with Lagrange multipliers for elliptic problems is introduced. It is based on a reformulation of the well-known FETI method as a saddle point problem with both primal and dual variables as unknowns. The resulting linear system is solved with block-structured preconditioners combined with a suitable Krylov subspace method. This approach allows the use of inexact subdomain solvers for the positive definite subproblems. It is shown that the condition number of the preconditioned saddle point problem is bounded independently of the number of subregions and depends only polylogarithmically on the number of degrees of freedom of individual local subproblems. Numerical results are presented for a plane stress cantilever membrane problem.
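Schematically (notation ours), the reformulated FETI system is the saddle point problem

    $\left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right) \left(\begin{array}{c} u \\ \lambda \end{array}\right) = \left(\begin{array}{c} f \\ 0 \end{array}\right)$,

where $u$ collects the primal subdomain unknowns and $\lambda$ the Lagrange multipliers enforcing continuity across subdomain interfaces; bounds of the stated kind typically take the form $\kappa \leq C\,(1+\log(H/h))^{\gamma}$ for subdomain size $H$ and mesh size $h$.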
-
TR1999-796
1999
FETI and Neumann-Neumann Iterative Substructuring Methods: Connections and New Results
Klawonn, A.;
Widlund, O.
-
Ph.D. Thesis
1999
Toward Stronger User Authentication
Monrose, Newman Fabian
Title: Toward Stronger User Authentication
Candidate: Monrose, Newman Fabian
Advisor(s): Kedem, Zvi
Abstract:
Password-based authentication is the dominant mechanism for verifying the identity of computer users, even though it is well known that people frequently choose passwords that are vulnerable to dictionary attacks. This talk addresses the issue of improving the security of password-based authentication, and presents authentication techniques that are more secure than traditional approaches against both on-line and off-line attacks.
We present a technique for strengthening the security of a textual password by augmenting it with biometric information, such as the duration and latency of keystrokes during entry of the password. Thereby, both the password and the user's typing pattern are used to corroborate the user's identity. The technique presented adapts to gradual changes in a user's typing pattern while maintaining the same strengthened password across authenticated sessions. Moreover, our technique does not reveal which of a user's keystroke features are used to generate the corresponding strengthened password. This knowledge is hidden even from an attacker who captures all the system information used by the authentication server, and we show that our technique significantly increases the amount of work such an attacker must perform.
Additionally, we present an alternative technique for user authentication that exploits features of graphical input devices. We propose and evaluate ``graphical passwords'', which serve the same purpose as textual passwords, but consist of handwritten drawings, possibly in addition to text. Graphical passwords derive their strength from the fact that graphical input devices allow one to decouple the positions of inputs from the temporal order in which these inputs occur. We use this independence to build new password-based authentication schemes that are convincingly stronger than conventional methods.
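The keystroke-hardening idea reduces to deriving extra secret bits from typing habits, roughly as below (thresholds and feature set are illustrative; the actual scheme also hides which features are distinguishing for the user):

    def feature_bits(timings_ms, thresholds_ms):
        """One bit per keystroke feature: is the user's duration/latency
        below a reference threshold?  Consistent habits reproduce the
        same bits, which combine with the password to derive the
        strengthened password."""
        return [int(t < ref) for t, ref in zip(timings_ms, thresholds_ms)]

    bits = feature_bits([88, 140, 95, 210], [100, 120, 100, 180])  # [1, 0, 1, 0]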
-
Ph.D. Thesis
1999
Optimization Over Symmetric Cones
Nayakkankuppam, Madhu
Title: Optimization Over Symmetric Cones
Candidate: Nayakkankuppam, Madhu
Advisor(s): Overton, Michael
Abstract:
We consider the problem of optimizing a linear function over the intersection of an affine space and a special class of closed, convex cones, namely the symmetric cones over the reals. This problem subsumes linear programming, convex quadratically constrained quadratic programming, and semidefinite programming as special cases. First, we derive some perturbation results for this problem class. Then, we discuss two solution methods: an interior-point method capable of delivering highly accurate solutions to problems of modest size, and a first order bundle method which provides solutions of low accuracy, but can handle much larger problems. Finally, we describe an application of semidefinite programming in electronic structure calculations, and give some numerical results on sample problems.
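In symbols, the problem class is

    $\min_x \; c^T x$ subject to $Ax = b$, $x \in K$,

where $K$ is a symmetric cone: the nonnegative orthant yields linear programming, products of second-order cones yield convex quadratically constrained quadratic programming, and the cone of positive semidefinite matrices yields semidefinite programming.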
-
Ph.D. Thesis
1999
Efficient Computational Model for Energy Propagation in Geometrically Represented Large Environments
Rajkumar, Ajay
Title: Efficient Computational Model for Energy Propagation in Geometrically Represented Large Environments
Candidate: Rajkumar, Ajay
Advisor(s): Perlin, Ken
Abstract:
Current radio propagation algorithms are narrowly focused on specific types of input models and do not scale well with an increase in the number of receiver locations or the number of polygons in an input model. In this dissertation, we look at the problem of efficiently computing energy propagation at radio frequencies in a range of geometrically defined environments, from a given transmitter location and for various transmitter and receiver characteristics. To achieve this goal, we propose a unified approach to radio propagation for different types of input models and their combinations, representing the geometry as a binary space partitioning tree and broadcasting energy from the source. The approach both scales to large input models and dynamically adapts to their scale without incurring unreasonable computational cost. It is equally effective for acoustic modeling.
We present a new adaptive ray-beam tracing algorithm which initially tessellates the surface of a transmitter into four-sided polygons. Each polygon is cast as a beam, which avoids arbitrarily large gaps or overlaps between adjacent beams. For fast intersection computation, each beam also carries its medial ray. As the computation proceeds, a ray-beam is adaptively subdivided depending on various parameters. The proposed algorithm has sublinear time complexity in the number of receiver locations.
Modeling diffraction off the edge of a wedge is important for computing the radio signal that reaches the shadow region of the wedge. Storing these edges explicitly in a data structure can be very expensive for large input models, especially terrain-based models with significant elevation variations. Instead of storing the edges statically, we present a new runtime edge-detection algorithm, along with its adaptation to environments represented as binary space partitioning trees.
Using these algorithms, we have developed a propagation prediction system called Propagate, which shows good statistical correlation between predicted and measured results for a number of different input models. The algorithms have also been used to model several other important computations related to a cellular network of transmitters, such as signal strength and path loss, delay spread, angular spread, carrier-to-interference ratio, and different antenna diversity schemes.
-
TR1999-776
1999
Memory Classification Analysis for Recursive C Structures
Schwartz, N.
Title: Memory Classification Analysis for Recursive C Structures
Author(s): Schwartz, N.
Abstract:
The long-time quest of the parallelizing compiler community for effective aggregate summarization techniques has led to increasingly sophisticated array section representations. In this paper, we show how the latest of these can be used for nested C structure summarization. We then show how this summarization notation can be used to make Shape Analysis precise on arbitrarily low-level code. Combining these techniques, we show that an appropriate generalization of Memory Classification Analysis, originally presented for Fortran programs, provides a flow dependence summarization technique for C code as well, while avoiding code normalization compared with previous techniques. In so doing, we break down perhaps the final conceptual barriers in the construction of practical programmer-friendly C parallelizing compilers.
-
TR1999-779
1999
Parallel Programming for Everyone
Schwartz, N.
Title: Parallel Programming for Everyone
Author(s): Schwartz, N.
Abstract:
This article proposes a novel architectural model which augments the latest developments in automatic program parallelization and distributed systems to achieve a level of practicality as yet unknown to either field. Today's premier automatic parallelization model is well suited to implementation on a network of commodity workstations (NOW) using only a very thin layer of software support. We describe a parallelizing compiler framework which greatly simplifies the parallelization of even highly complex sequential applications while producing extremely effective parallelizations for the NOW. We further show how our model greatly enhances programmer productivity through the use of minimally invasive C++ transformation techniques, aiding both debugging and portability.
-
TR1999-782
1999
Sparse Constant Propagation via Memory Classification Analysis
Schwartz, N.
Title: Sparse Constant Propagation via Memory Classification Analysis
Author(s): Schwartz, N.
Abstract:
This article presents a novel Sparse Constant Propagation technique which provides a heretofore unknown level of practicality. Unlike other techniques which are based on data flow, it is based on the execution-order summarization sweep employed in Memory Classification Analysis (MCA), a technique originally developed for array dependence analysis. This methodology achieves a precise description of memory reference activity within a summary representation that grows only linearly with program size. Because of this, the collected sparse constant information need not be artificially limited to satisfy classical data flow lattice requirements, which constrain other algorithms to discard information in the interests of efficient termination. Sparse Constant Propagation is not only more effective within the MCA framework, but it in fact generalizes the framework. Original MCA provides the means to break only simple induction and reduction types of flow-dependences. The integrated framework provides the means to also break flow-dependences for which array values can be propagated.
-
Ph.D. Thesis
1999
Automatic Parallelization: An Incremental, Optimistic, Practical Approach
Schwartz, Naftali
Title: Automatic Parallelization: An Incremental, Optimistic, Practical Approach
Candidate: Schwartz, Naftali
Advisor(s): Kedem, Zvi
Abstract:
The historic focus of Automatic Parallelization efforts has been limited in two ways. First, parallelization has generally been attempted only on codes which can be proven to be parallelizable. Unfortunately, the requisite dependence analysis is undecidable, and today's applications demonstrate that this restriction is more than theoretical. Second, parallel program generation has generally been geared to custom multiprocessing hardware. Although a network of commodity workstations (NOW) could theoretically be harnessed to serve as a multiprocessing platform, the NOW has characteristics which are at odds with effective utilization.
This thesis shows that by restricting our attention to the important domain of ``embarrassingly parallel'' applications, leveraging existing scalable and efficient network services, and carefully orchestrating a synergy between compile-time transformations and a small runtime system, we can achieve a parallelization that not only works in the face of inconclusive program analysis, but is indeed efficient for the NOW. We optimistically parallelize loops whose memory access behavior is unknown, relying on the runtime system to provide efficient detection and recovery in the case of an overly optimistic transformation. Unlike previous work in speculative parallelization, we provide a methodology which is not tied to the Fortran language, making it feasible as a generally useful approach. Our runtime system implements Two-Phase Idempotent Eager Scheduling (TIES) for efficient network execution, providing an Automatic Parallelization platform with performance scalability for the NOW.
Our transformation divides the original program into a server and zero or more clients. The server program is a specialization of the original application in which each parallel loop is replaced by a scheduling call to the client that comprises the body of that parallel loop. The scheduler remotely executes the appropriate instances of this client on available machines.
We describe the transformation and runtime system in detail, and report on the automatic transformation achieved by our implementation prototype in two case studies. In each of these cases, we were able to automatically locate the important coarse-grained loops, construct a shared-memory layout, and generate appropriate server and client code. Furthermore, we show that our generated parallel programs achieve near-linear speedups for sufficiently large problem sizes.
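The runtime's scheduling discipline can be sketched with threads standing in for remote machines (an illustration of the eager-scheduling idea only; TIES itself executes clients across a NOW):

    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    def run_parallel_loop(chunks, client, workers=4):
        """Eagerly schedule idempotent chunks; keep the first completion
        of each and re-execute chunks whose machines fail."""
        results = {}
        with ThreadPoolExecutor(workers) as pool:
            pending = {pool.submit(client, c): i for i, c in enumerate(chunks)}
            while pending:
                done, _ = wait(pending, return_when=FIRST_COMPLETED)
                for f in done:
                    i = pending.pop(f)
                    if f.exception() is not None:        # failure: reissue
                        pending[pool.submit(client, chunks[i])] = i
                    elif i not in results:               # first result wins
                        results[i] = f.result()
        return [results[i] for i in range(len(chunks))]

Idempotence is what makes duplicate executions harmless: whichever copy finishes first supplies the result, and the rest are discarded.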
-
TR1999-788
1999
A FETI Domain Decomposition Method for Maxwell's Equations with Discontinuous Coefficients in Two Dimensions
Toselli, A.;
Klawonn, A.
Title: A FETI Domain Decomposition Method for Maxwell's Equations with Discontinuous Coefficients in Two Dimensions
Author(s): Toselli, A.; Klawonn, A.
Abstract:
A class of FETI methods for the edge element approximation of vector field problems in two dimensions is introduced and analyzed. First, an abstract framework is presented for the analysis of a class of FETI methods where a natural coarse problem, associated with the substructures, is lacking. Then, a family of FETI methods for edge element approximations is proposed. It is shown that the condition number of the corresponding method is independent of the number of substructures and grows only polylogarithmically with the number of unknowns associated with individual substructures. The estimate is also independent of the jumps of both coefficients of the original problem. Numerical results validating the theoretical bounds are given. The method and its analysis can be easily generalized to Raviart-Thomas element approximations in two and three dimensions.
-
TR1999-785
1999
Domain Decomposition Methods for Vector Field Problems
Toselli, A.
Title: Domain Decomposition Methods for Vector Field Problems
Author(s): Toselli, A.
Abstract:
Finite element approximation of vector equations gives rise to very large, sparse linear systems. In this dissertation, we study some domain decomposition methods for finite element approximations of vector-valued problems involving the curl and the divergence operators. Edge and Raviart-Thomas finite elements are employed. Problems involving the curl operator arise, for instance, when approximating Maxwell's equations and the stream function-vorticity formulation of Stokes' problem, while mixed approximations of second order elliptic equations and stabilized mixed formulations of Stokes' problem give rise to problems involving the divergence operator.
We first consider Maxwell's equations in three-dimensional conductive media using implicit time-stepping. We prove that the condition number of a two-level overlapping algorithm is bounded independently of the number of unknowns, the number of subregions, and the time step.
For the same equation in two dimensions, we consider two new iterative substructuring methods. The first one is based on individual edges, while the second one is a Neumann-Neumann method. We show that the condition numbers of the corresponding methods increase slowly with the number of unknowns in each substructure, but are independent of the time step and of even large jumps of the coefficients. We also analyze similar preconditioners for a three-dimensional vector problem involving the divergence operator, and prove that the preconditioners are quasi-optimal and scalable in this case as well.
For each method, we provide a series of numerical experiments that confirm our theoretical analysis.
This work generalizes well-known results for scalar second order elliptic equations and has required the development of several new technical tools.
-
TR1999-786
1999
Neumann-Neumann Methods for Vector Field Problems
Toselli, A.
Title: Neumann-Neumann Methods for Vector Field Problems
Author(s): Toselli, A.
Abstract:
In this paper, we study some Schwarz methods of Neumann-Neumann type for some vector field problems, discretized with the lowest order Raviart-Thomas and Nedelec finite elements. We consider a hybrid Schwarz preconditioner consisting of a coarse component, which involves the solution of the original problem on a coarse mesh, and local ones, which involve the solution of Neumann problems on the elements of the coarse triangulation, also called substructures. We show that the condition number of the corresponding method is independent of the number of substructures and grows logarithmically with the number of unknowns associated with an individual substructure. It is also independent of the jumps of both coefficients of the original problem. The numerical results presented validate our theoretical bound.
-
TR1999-794
1999
Transparent Network Connectivity in Dynamic Cluster Environments
Fu, X.;
Wang, H.; Karamcheti, V.
Title: Transparent Network Connectivity in Dynamic Cluster Environments
Author(s): Fu, X.; Wang, H.; Karamcheti, V.
Abstract:
Improvements in microprocessor and networking performance have made networks of workstations a very attractive platform for high-end parallel and distributed computing. However, the effective deployment of such environments requires addressing two problems not associated with dedicated parallel machines: heterogeneous resource capabilities and dynamic availability. Achieving good performance requires that application components be able to migrate between cluster resources and efficiently adapt to the underlying resource capabilities. An important component of the required support is maintaining network connectivity, which directly impacts the transparency of migration to the application and its performance after migration. Unfortunately, existing approaches rely on either extensive operating system modifications or new APIs to maintain network connectivity, both of which limit their wider applicability.
This paper presents the design, implementation, and performance of a transparent network connectivity layer for dynamic cluster environments. Our design uses the techniques of API interception and virtualization to construct a transparent layer in user space; use of the layer requires no modification either to the application or the underlying operating system and messaging layers. Our layer enables the migration of application components without breaking network connections, and additionally permits adaptation to the characteristics of the underlying networking substrate. Experiments with supporting a persistent socket interface in two environments---an Ethernet LAN on top of TCP/IP, and a Myrinet LAN on top of Fast Messages---show that our approach incurs minimal overheads and can effectively select the best substrate for implementing application communication requirements.
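Seen from the application, the layer provides a socket that survives endpoint migration; a toy explicit version follows (the real layer achieves this transparently by intercepting the socket API, and resolve_peer() is a hypothetical lookup of the peer's current location):

    import socket

    class PersistentSocket:
        def __init__(self, resolve_peer):
            self.resolve_peer = resolve_peer   # returns current (host, port)
            self.sock = None

        def send(self, data):
            if self.sock is None:
                self.sock = socket.create_connection(self.resolve_peer())
            try:
                self.sock.sendall(data)
            except OSError:                    # peer migrated: reconnect, retry
                self.sock = socket.create_connection(self.resolve_peer())
                self.sock.sendall(data)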
-
Ph.D. Thesis
1999
Destructive Effect Analysis And Finite Differencing For Strict Functional Languages
Yung, Chung
Title: Destructive Effect Analysis And Finite Differencing For Strict Functional Languages
Candidate: Yung, Chung
Advisor(s): Goldberg, Benjamin
Abstract:
Destructive update optimization is critical for writing scientific codes in functional languages. Pure functional languages do not allow mutations, destructive updates, or selective updates, so straightforward implementations of functional languages induce large amounts of copying to preserve program semantics. The unnecessary copying of data can increase both the execution time and the memory requirements of an application. Destructive update optimization makes an essential improvement to the implementation of functional programs with compound data structures, such as arrays, sets, and aggregates. Moreover, for many compiler optimization techniques that depend on side-effects, destructive update analysis provides the necessary input. Among such techniques, finite differencing captures common yet distinctive program constructions involving costly repeated calculations and transforms them into more efficient incremental constructions.
In this dissertation, we develop a new approach to destructive update analysis, called destructive effect analysis. We present the semantic model and the abstract interpretation of destructive effect analysis. We designed EAS, an experimental applicative language with set expressions. The implementation of destructive effect analysis is integrated with the optimization phase of our experimental compiler for EAS. We apply finite differencing to optimize pure functional programs, and we show the performance improvement that results from applying the finite differencing optimization together with the destructive update optimization.
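The optimization target can be seen in two lines (a sketch of the idea, not the EAS compiler's output):

    def update_pure(xs, i, v):
        ys = list(xs)      # semantics-preserving copy: O(n) per update
        ys[i] = v
        return ys

    def update_destructive(xs, i, v):
        xs[i] = v          # legal only where destructive effect analysis
        return xs          # proves the old value is never observed again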