Theses & Reports
Instructions for submitting a technical report or thesis.
You can find technical reports published prior to 1990 archived here.
-
TR2003-839
2003
A kernel-independent fast multipole algorithm
Biros, George;
Ying, Lexing; Zorin, Denis
Title: A kernel-independent fast multipole algorithm
Author(s): Biros, George; Ying, Lexing; Zorin, Denis
Abstract:
We present a new fast multipole method for particle simulations. The main feature of our algorithm is that it is kernel-independent, in the sense that no analytic expansions are used to represent the far field. Instead we use equivalent densities, which we compute by solving small Dirichlet-type boundary value problems. The translations from the sources to the induced potentials are accelerated by singular value decomposition in 2D and fast Fourier transforms in 3D. We have tested the new method on the single- and double-layer operators for the Laplacian, the modified Laplacian, the Stokes, the modified Stokes, the Navier, and the modified Navier operators in two and three dimensions. Our numerical results indicate that our method compares very well with the best known implementations of the analytic FMM for both the Laplacian and modified Laplacian kernels. Its advantage is the (relative) simplicity of the implementation and its immediate extension to more general kernels.
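For illustration, the equivalent-density idea can be sketched in a few lines of Python (a minimal sketch under assumed surface discretizations and the 2D Laplace kernel, not the authors' implementation): densities on an equivalent surface are chosen to reproduce the sources' potential on a larger check surface, and the far field is then evaluated from the equivalent densities alone.

import numpy as np

def laplace_kernel(x, y):
    # 2D Laplace single-layer kernel G(x, y) = -log|x - y| / (2*pi)
    r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return -np.log(r) / (2.0 * np.pi)

def circle(center, radius, n):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return center + radius * np.stack([np.cos(t), np.sin(t)], axis=1)

rng = np.random.default_rng(0)
sources = rng.uniform(-0.5, 0.5, size=(50, 2))      # sources inside a box
charges = rng.standard_normal(50)

equiv = circle(np.zeros(2), 0.7, 16)                # equivalent surface
check = circle(np.zeros(2), 1.4, 32)                # check surface

# Match the sources' potential on the check surface to solve for equivalent densities.
target = laplace_kernel(check, sources) @ charges
density, *_ = np.linalg.lstsq(laplace_kernel(check, equiv), target, rcond=None)

# Far-field evaluation now uses only the 16 equivalent densities, with no analytic expansion.
far_point = np.array([[3.0, 2.0]])
exact = laplace_kernel(far_point, sources) @ charges
approx = laplace_kernel(far_point, equiv) @ density
print(abs(exact - approx)[0])                       # small: the densities reproduce the far field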
-
TR2003-837
2003
An Embedded Boundary Integral Solver for the Stokes Equations
Biros, George;
Ying, Lexing; Zorin, Denis
Title: An Embedded Boundary Integral Solver for the Stokes Equations
Author(s): Biros, George; Ying, Lexing; Zorin, Denis
Abstract:
We present a new method for the solution of the Stokes equations. Our goal is to develop a robust and scalable methodology for two- and three-dimensional, moving-boundary flow simulations. Our method is based on Anita Mayo's method for Poisson's equation: "The Fast Solution of Poisson's and the Biharmonic Equations on Irregular Regions", SIAM J. Num. Anal., 21 (1984), pp. 285--299. We embed the domain in a rectangular domain, for which fast solvers are available, and we impose the boundary conditions as interface (jump) conditions on the velocities and tractions. We use an indirect boundary integral formulation for the homogeneous Stokes equations to compute the jumps. The resulting integral equations are discretized by Nyström's method. The rectangular domain problem is discretized by finite elements for a velocity-pressure formulation with equal-order bilinear interpolation (Q1-Q1). Stabilization is used to circumvent the inf-sup condition for the pressure space. For the integral equations, fast matrix-vector multiplications are achieved via an N log N algorithm based on a block representation of the discrete integral operator, combined with (kernel-independent) singular value decomposition to sparsify low-rank blocks. Our code is built on top of PETSc, an MPI-based parallel linear algebra library. The regular grid solver is a Krylov method (conjugate residuals) combined with an optimal two-level Schwarz preconditioner. For the integral equation we use GMRES. We have tested our algorithm on several numerical examples and have observed optimal convergence rates.
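The low-rank block compression mentioned above can be illustrated with a short sketch (assumed data and a generic 1/r interaction, not the authors' code): a block of the discrete integral operator coupling two well-separated clusters is replaced by a truncated SVD, so applying it to a vector costs O(k(m+n)) rather than O(mn).

import numpy as np

def compress_block(A, tol=1e-8):
    # Truncated SVD of an off-diagonal block; keep singular values above a relative tolerance.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))
    return U[:, :k], s[:k], Vt[:k, :]

# A smooth far-field kernel block between two well-separated point clusters is
# numerically low rank, so only a handful of singular values survive the truncation.
x = np.linspace(0.0, 1.0, 200)[:, None]
y = np.linspace(10.0, 11.0, 300)[None, :]
A = 1.0 / np.abs(x - y)

U, s, Vt = compress_block(A)
v = np.random.rand(300)
fast = U @ (s * (Vt @ v))                            # compressed apply
print(len(s), np.linalg.norm(fast - A @ v) / np.linalg.norm(A @ v))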
-
TR2003-835
2003
Survey: Eigenvector Analysis in Webpage Rankings
Chang, Hung-Hsien
Title: Survey: Eigenvector Analysis in Webpage Rankings
Author(s): Chang, Hung-Hsien
Abstract:
Two major techniques have been proposed for using the structure of links in the World Wide Web to determine the relative significance of Web pages. The PageRank algorithm [BP98], which is a critical part of the Google search engine, gives a single measure of importance for each page in the Web. The HITS algorithm [K98] applies to a set of pages believed relevant to a given query, and assigns two values to each page: the degree to which the page is a hub and the degree to which it is an authority. Both algorithms have a natural interpretation in terms of a random walk over the set of pages involved, and in both cases the computation amounts to computing an eigenvector of the transition matrix for this random walk.
This paper surveys the literature discussing these two techniques and their variants, and their connection to random walks and eigenvector computation. It also discusses the stability of these techniques under small changes in the Web link structure.
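As a concrete reminder of the eigenvector view, here is a minimal PageRank power iteration on a toy graph (an illustrative sketch only; the damping factor 0.85 and the four-page link structure are assumptions, not taken from the survey):

import numpy as np

def pagerank(out_links, damping=0.85, tol=1e-12):
    # Power iteration on the PageRank Markov chain of a small directed graph.
    n = len(out_links)
    r = np.full(n, 1.0 / n)
    while True:
        new = np.full(n, (1.0 - damping) / n)
        for page, targets in enumerate(out_links):
            if targets:                              # distribute rank along out-links
                new[targets] += damping * r[page] / len(targets)
            else:                                    # dangling page: spread uniformly
                new += damping * r[page] / n
        if np.abs(new - r).sum() < tol:
            return new
        r = new

# Pages 0..3; page 3 has no out-links (dangling).
print(pagerank([[1, 2], [2], [0], []]))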
-
TR2003-845
2003
Shrinkage-Based Similarity Metric for Cluster Analysis of Microarray Data
Cherepinsky, Vera;
Feng, Jiawu; Rejali, Marc; Mishra, Bud
Title: Shrinkage-Based Similarity Metric for Cluster Analysis of Microarray Data
Author(s): Cherepinsky, Vera; Feng, Jiawu; Rejali, Marc; Mishra, Bud
Abstract:
The current standard correlation coefficient used in the analysis of microarray data, including gene expression arrays, was introduced in [1]. Its formulation is rather arbitrary. We give a mathematically rigorous derivation of the correlation coefficient of two gene expression vectors based on James-Stein shrinkage estimators. We use the background assumptions described in [1], also taking into account the fact that the data can be treated as transformed into normal distributions. While [1] uses zero as an estimator for the expression vector mean μ, we start with the assumption that for each gene, μ is itself a zero-mean normal random variable (with a priori distribution N(0, τ²)), and use Bayesian analysis to update that belief, obtaining the a posteriori distribution of μ in terms of the data. The estimator for μ, obtained after shrinkage towards zero, differs from the mean of the data vectors and ultimately leads to a statistically robust estimator for correlation coefficients.
To evaluate the effectiveness of shrinkage, we conducted in silico experiments and also compared similarity metrics on a biological example using the data set from [1]. For the latter, we classified genes involved in the regulation of yeast cell-cycle functions by computing clusters based on various definitions of correlation coefficients, including the one using shrinkage, and contrasting them against clusters based on the activators known in the literature. In addition, we conducted an extensive computational analysis of the data from [1], empirically testing the performance of different values of the shrinkage factor γ and comparing them to the values of γ corresponding to the three metrics addressed here, namely, γ=0 for the Eisen metric, γ=1 for the Pearson correlation coefficient, and γ computed from the data for the Shrinkage metric.
The estimated "false-positives" and "false-negatives" from this study indicate the relative merits of clustering algorithms based on different statistical correlation coefficients as well as the sensitivity of the clustering algorithm to small perturbations in the correlation coefficients. These results indicate that using the shrinkage metric improves the accuracy of the analysis.
All derivation steps are described in detail; all mathematical assertions used in the derivation are proven in the appendix.
[1] Eisen, M.B., Spellman, P.T., Brown, P.O., and Botstein, D. (1998), PNAS USA 95, 14863-14868.
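One way to picture the role of the shrinkage factor γ (a minimal sketch of the interpolation described above, with the shrunken mean taken as γ times the sample mean; this parameterization is an illustrative assumption, not the authors' derivation):

import numpy as np

def shrinkage_correlation(x, y, gamma):
    # Correlation of two expression vectors with their mean estimates shrunk towards zero.
    # gamma = 0 reproduces an Eisen-style (zero-mean) coefficient, gamma = 1 the Pearson
    # coefficient; intermediate values interpolate between the two.
    mx, my = gamma * x.mean(), gamma * y.mean()
    xc, yc = x - mx, y - my
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.2, 1.9, 3.3, 3.8])
for g in (0.0, 0.5, 1.0):
    print(g, shrinkage_correlation(x, y, g))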
-
Ph.D. Thesis
2003
Comparing and Improving Centralized and Distributed Techniques for Coordinating Massively Parallel Shared-Memory Systems
Freudenthal, Eric
Title: Comparing and Improving Centralized and Distributed Techniques for Coordinating Massively Parallel Shared-Memory Systems
Candidate: Freudenthal, Eric
Advisor(s): Gottlieb, Allan
Abstract:
Two complementary approaches have been proposed to achieve high-performance inter-process coordination on highly parallel shared-memory systems. Gottlieb et al. introduced the technique of combining concurrent memory references, thereby reducing hot spot contention and enabling the bottleneck-free execution of algorithms referencing a small number of shared variables. Mellor-Crummey and Scott introduced an alternative distributed local-spin technique that minimizes hot spot contention by avoiding the polling of hot spot variables, exploiting the availability of processor-local shared memory. My principal contributions are a comparison of these two approaches, and significant improvements to the former.
The NYU Ultra3 prototype is the only system built that implements memory reference combining. My research utilizes micro-benchmark simulation studies of massively parallel Ultra3 systems executing coordination algorithms. This investigation detects problems in the Ultra3 design that result in higher-than-expected memory latency for reference patterns typical of busy-wait polling. This causes centralized coordination algorithms to perform poorly. Several architectural enhancements are described that significantly reduce the latency of these access patterns, thereby improving the performance of the centralized algorithms.
I investigate existing centralized algorithms for readers-writers and barrier coordination, all of which require fetch-and-add, and discover variants that require fewer memory accesses (and hence have lower latency). In addition, my evaluation includes novel algorithms that require only a restricted form of fetch-and-add.
Coordination latency of these algorithms executed on the enhanced combining architecture is compared to the latency of the distributed local-spin alternatives. These comparisons indicate that the distributed local-spin dissemination barrier, which generates no hot spot traffic, has latency slightly inferior to that of the best centralized algorithms investigated. However, for the less structured readers-writers problem, the centralized algorithms significantly outperform the distributed local-spin algorithm.
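For readers unfamiliar with fetch-and-add coordination, a toy sense-reversing barrier built on a single fetch-and-add counter is sketched below (an illustration only, not one of the dissertation's algorithms; on combining hardware the concurrent fetch-and-adds on the shared counter would be merged in the network rather than serialized, and the busy-wait on the sense flag is exactly the kind of hot spot polling the architectural enhancements target).

import threading

class FetchAndAddBarrier:
    # Sense-reversing barrier built on a single fetch-and-add counter.
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.sense = False
        self.lock = threading.Lock()                 # stands in for an atomic fetch-and-add

    def fetch_and_add(self, delta):
        with self.lock:
            old = self.count
            self.count += delta
            return old

    def wait(self):
        my_sense = not self.sense
        if self.fetch_and_add(1) == self.n - 1:      # last thread to arrive
            self.count = 0
            self.sense = my_sense                    # release the waiting threads
        else:
            while self.sense != my_sense:            # busy-wait polling (a hot spot)
                pass

def worker(barrier, i):
    for phase in range(3):
        barrier.wait()
        print(f"thread {i} passed phase {phase}")

barrier = FetchAndAddBarrier(4)
threads = [threading.Thread(target=worker, args=(barrier, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()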
-
TR2003-849
2003
Comparing the Performance of Centralized and Distributed Coordination on Systems with Improved Combining Switches
Freudenthal, Eric;
Gottlieb, Allan
Title: Comparing the Performance of Centralized and Distributed Coordination on Systems with Improved Combining Switches
Author(s): Freudenthal, Eric; Gottlieb, Allan
Abstract:
Memory system congestion due to serialization of hot spot accesses can adversely affect the performance of interprocess coordination algorithms. Hardware and software techniques have been proposed to reduce this congestion and thereby provide superior system performance. The combining networks of Gottlieb et al. automatically parallelize concurrent hot spot memory accesses, improving the performance of algorithms that poll a small number of shared variables. The widely used MCS distributed-spin algorithms take a software approach: they reduce hot spot congestion by polling only variables stored locally. Our investigations detected performance problems in existing designs for combining networks, and we propose mechanisms that alleviate them. Simulation studies described herein indicate that centralized readers-writers algorithms executed on the improved combining networks have performance at least competitive with that of the MCS algorithms.
-
TR2003-848
2003
QTM: Trust Management with Quantified Stochastic Attributes
Freudenthal, Eric;
Karamcheti, Vijay
Title: QTM: Trust Management with Quantified Stochastic Attributes
Author(s): Freudenthal, Eric; Karamcheti, Vijay
Abstract:
Trust management systems enable the construction of access-control infrastructures suitable for protecting sensitive resources from access by unauthorized agents. State-of-the-art systems (i) provide fail-safe behavior, in that access is denied when authorizing credentials are revoked, (ii) can mitigate the risk of insider attacks using mechanisms for threshold authorization, in which several independent, partially trusted agents are required to co-sponsor sensitive activities, and (iii) are capable of enforcing intra- and inter-organizational access control policies.
Despite these advantages, trust management systems are limited in their ability to express partial trust. Additionally, they are cumbersome to administer when there are a large number of related access rights with differing trust (and thereby access) levels due to the need for explicit enumeration of the exponential number of agent combinations. More importantly, these systems have no provision for fault tolerance in cases where a primary authorization is lost (perhaps due to revocation), but others are available. Such situations may result in a cascading loss of access and possible interruption of service.
In this paper, we propose extending traditional trust management systems through a framework of reliability and confidence metrics. This framework naturally captures partial trust relationships, thereby reducing administrative complexity of access control systems with multiple related trust levels and increasing system availability in the presence of authorization faults while still maintaining equivalent safety properties.
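One hypothetical reading of such quantified metrics (the combination rule, independence assumption, and threshold below are illustrative assumptions, not the QTM design): each partially trusted co-sponsor carries a confidence value, and an activity is authorized when the combined confidence of the available, non-revoked sponsors reaches a required level, so losing one credential need not cause a cascading loss of access.

def combined_confidence(confidences):
    # Probability that at least one available sponsor's authorization is sound,
    # treating sponsors as independent (an illustrative assumption).
    distrust = 1.0
    for c in confidences:
        distrust *= (1.0 - c)
    return 1.0 - distrust

def authorize(sponsors, required=0.95):
    # sponsors: dict mapping agent name -> (confidence, revoked flag)
    active = [conf for conf, revoked in sponsors.values() if not revoked]
    return combined_confidence(active) >= required

sponsors = {"alice": (0.9, False), "bob": (0.7, True), "carol": (0.6, False)}
print(authorize(sponsors))   # bob's credential is revoked, but alice and carol still suffice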
-
Ph.D. Thesis
2003
Infrastructure Support for Accessing Network Services in Dynamic Network Environments
Fu, Xiaodong
Title: Infrastructure Support for Accessing Network Services in Dynamic Network Environments
Candidate: Fu, Xiaodong
Advisor(s): Karamcheti, Vijay
Abstract:
Despite increases in network bandwidth, accessing network services across a wide area network remains a challenging task. The difficulty comes mainly from the heterogeneous and constantly changing network environment, which often results in a poor user experience for network-oblivious applications.
A promising approach to address this is to provide network awareness in communication paths. While several such path-based infrastructures have been proposed, the network awareness provided by them is rather limited. Many challenging problems remain, in particular: (1) how to automatically create effective network paths whose performance is optimized for encountered network conditions, (2) how to dynamically reconfigure such paths when network conditions change, and (3) how to manage and distribute network resources among different paths and between different network regions. Furthermore, there is poor understanding of the benefits of using the path-based approach over other alternatives.
This dissertation describes solutions for these problems, built into a programmable network infrastructure called Composable Adaptive Network Services (CANS). The CANS infrastructure provides applications with network-aware communication paths that are automatically created and dynamically modified. CANS highlights four key mechanisms: (1) a high-level integrated type-based specification of components and network resources; (2) automatic path creation strategies; (3) system support for low-overhead path reconfiguration; and (4) distributed strategies for managing and allocating network resources.
We evaluate these mechanisms using experiments with typical applications running in the CANS infrastructure, and extensive simulation on a large-scale network topology to compare with other alternatives. Experimental results validate the effectiveness of our approach, verifying that (1) the path-based approach provides the best and most robust performance under a wide range of network configurations as compared to end-point or proxy-based alternatives; (2) automatic generation of network-aware paths is feasible and provides considerable performance advantages, requiring only minimal input from applications; (3) path reconfiguration strategies ensure continuous adaptation and provide desirable adaptation behaviors using automatically generated paths; (4) both the run-time overhead and the reconfiguration time of CANS paths are negligible for most applications; (5) the resource management and allocation strategies allow the effective setup of shared resource pools in the network and the sharing of resources among paths.
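To make the notion of automatically composed, type-checked paths concrete, here is a small sketch (the component names and data types are hypothetical, and this is not the CANS planning algorithm): adaptation components are chained by matching output and input types from the source format to the format the client accepts.

from collections import deque

# Hypothetical adaptation components: (name, input type, output type).
COMPONENTS = [
    ("decode_mpeg", "mpeg", "raw_video"),
    ("downsample", "raw_video", "raw_video_small"),
    ("encode_h263", "raw_video_small", "h263"),
    ("encode_jpeg", "raw_video_small", "jpeg_stream"),
]

def plan_path(source_type, client_type):
    # Breadth-first search for the shortest chain of components whose types line up.
    queue = deque([(source_type, [])])
    seen = {source_type}
    while queue:
        current, path = queue.popleft()
        if current == client_type:
            return path
        for name, in_type, out_type in COMPONENTS:
            if in_type == current and out_type not in seen:
                seen.add(out_type)
                queue.append((out_type, path + [name]))
    return None

print(plan_path("mpeg", "h263"))   # ['decode_mpeg', 'downsample', 'encode_h263']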
-
TR2003-843
2003
Why Path-Based Adaptation? Performance Implications of Different Adaptation Mechanisms for Network Content Delivery
Fu, Xiaodong;
Karamcheti, Vijay
Title: Why Path-Based Adaptation? Performance Implications of Different Adaptation Mechanisms for Network Content Delivery
Author(s): Fu, Xiaodong; Karamcheti, Vijay
Abstract:
Because of heterogeneous and dynamically changing network environments, content delivery across the network requires system support for coping with different network conditions in order to provide a satisfactory user experience. Despite the existence of many adaptation frameworks, the question of which adaptation approach performs best under which network configurations remains unanswered; the performance implications of the different adaptation approaches (end-point, proxy-based, and path-based) have not yet been studied. This paper aims to address this shortcoming by conducting a series of simulation-based experiments that compare the performance of these adaptation approaches under different network configurations. To make the comparison fair, approach-neutral strategies are proposed for constructing communication paths and managing network resources. The experimental results show that there are well-defined network environments under which each of these approaches delivers its best performance, and that among them the path-based approach, which uses the whole communication path for adaptation, provides the best and most robust performance across different network configurations and for different types of servers and clients.
-
TR2003-847
2003
Balancing Neumann-Neumann Preconditioners for the Mixed Formulation of Almost-Incompressible Linear Elasticity
Goldfeld, Paulo
Title: Balancing Neumann-Neumann Preconditioners for the Mixed Formulation of Almost-Incompressible Linear Elasticity
Author(s): Goldfeld, Paulo
Abstract:
Balancing Neumann-Neumann (BNN) methods are extended to the equations arising from the mixed formulation of almost-incompressible linear elasticity problems discretized with discontinuous-pressure finite elements. This family of domain decomposition algorithms has previously been shown to be effective for large finite element approximations of positive definite elliptic problems. Our methods are proved to be scalable and to depend weakly on the size of the local problems. Our work is an extension of previous work by Pavarino and Widlund on BNN methods for the Stokes equations.
Our iterative substructuring methods are based on the partition of the unknowns into interior ones - including interior displacements and pressures with zero average on every subdomain - and interface ones - displacements on the geometric interface and constant-by-subdomain pressures. The restriction of the problem to the interior degrees of freedom is then a collection of decoupled local problems that are well-posed even in the incompressible limit. The interior variables are eliminated and a hybrid preconditioner of BNN type is designed for the Schur complement problem. The iterates are restricted to a benign subspace, on which the preconditioned operator is positive definite, allowing for the use of conjugate gradient methods.
A complete convergence analysis of the method is presented for the constant coefficient case. The algorithm is extended to handle discontinuous coefficients, but a full analysis is not provided. Extensions of the algorithm and of the analysis are also presented for problems combining pure-displacement and mixed finite elements in different subregions. An algorithm is also proposed for problems with continuous discrete pressure spaces.
All the algorithms discussed have been implemented in parallel codes that have been successfully tested on large sample problems on large parallel computers; results are presented and discussed. Implementation issues are also discussed, including a version of our main algorithm that does not require the solution of any auxiliary saddle-point problem, since all subproblems of the preconditioner can be reduced to solving symmetric positive definite linear systems.
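For orientation, the classical balancing Neumann-Neumann preconditioner for a Schur complement S assembled from subdomain contributions has the hybrid form below (stated here in its standard positive definite setting for context; the report's mixed-elasticity variant additionally eliminates interior unknowns and restricts the iterates to a benign subspace):

% Hybrid balancing Neumann-Neumann preconditioner for S = \sum_i R_i^T S_i R_i,
% with coarse solve S_0, weight matrices D_i, and local (pseudo)inverses S_i^{\dagger}:
M^{-1}_{\mathrm{BNN}}
  = R_0^T S_0^{-1} R_0
  + \left(I - R_0^T S_0^{-1} R_0 S\right)
    \Bigl(\sum_{i=1}^{N} D_i R_i^T S_i^{\dagger} R_i D_i\Bigr)
    \left(I - S\, R_0^T S_0^{-1} R_0\right)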
-
Ph.D. Thesis
2003
Enriched Content: Concept, Architecture, Implementation, and Applications
Chang, Hung-Hsien
Title: Enriched Content: Concept, Architecture, Implementation, and Applications
Candidate: Chang, Hung-Hsien
Advisor(s): Perlin, Ken
Abstract:
Since the debut of the World Wide Web, Web users have been facing the following problems:
[Extended Semantics]
When we read or study a digital document that we wish to explore further, typically we interrupt our work to start a search. It costs time.
[Reverse Hyperlink]
When we visit a web page, we might be curious about what other hyperlinks point to the visited page. These links would most likely be of related interest. Can we get "real time" information about what other pages are pointing to this page?
[Version Control]
Many of us have been frustrated and even annoyed when the hyperlink that we follow gives us a "404 not found" or the retrieved webpage content is entirely different from the one we have bookmarked. Could we also have access to past versions, even if the hyperlink has been removed or the content has been changed?
[Composition Assistant]
Writing is not an easy task. We labor to structure a body of text, sort out ideas, find materials, and digest information. We would like an automated service that can associate the content we have produced with other contexts (on the Web) and bring these web contexts to us for reference.
In this thesis, we provide a unified framework and architecture, named enriched content, to resolve the above problems. We apply the architecture and show how enriched content can be used in each application. We demonstrate that this method can be a new way of writing add-on functions for various document applications without having to write an individual plug-in for each application or rewrite each application. We also briefly discuss possible future development.
-
TR2003-836
2003
AQuery: Query Language for Ordered Data, Optimization Techniques, and Experiments
Lerner, Alberto;
Shasha, Dennis
Title: AQuery: Query Language for Ordered Data, Optimization Techniques, and Experiments
Author(s): Lerner, Alberto; Shasha, Dennis
Abstract:
An order-dependent query is one whose result (interpreted as a multiset) changes if the order of the input records is changed. In a stock-quotes database, for instance, retrieving all quotes concerning a given stock for a given day does not depend on order, because the collection of quotes does not depend on order. By contrast, finding the five-price moving average in a trade table gives a result that depends on the order of the table. Query languages based on the relational data model can handle order-dependent queries only through add-ons. SQL:1999, for example, permits the use of a data-ordering mechanism called a "window" only in limited parts of a query. As a result, order-dependent queries are difficult to write in those languages, and optimization techniques for these features, applied as pre- or post-enumeration phases, are generally crude. The goal of this paper is to show that when order is a property of the underlying data model and algebra, writing order-dependent queries in a language can be as natural as their optimization. We introduce AQuery, an SQL-like query language and algebra with from-the-ground-up support for order. We also present a framework for optimizing the categories of order-dependent queries it expresses. The framework is able to take advantage of the large body of query transformations developed for relational systems while incorporating new ones described here. We show by experiment that the resulting system is orders of magnitude faster than current SQL:1999 systems on many natural order-dependent queries.
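To illustrate order dependence with a toy example (plain Python, not AQuery syntax): the same multiset of trade prices yields different five-price moving averages when the rows arrive in a different order.

def moving_average(prices, window=5):
    # Trailing moving average over an ordered sequence of prices.
    out = []
    for i in range(len(prices)):
        lo = max(0, i - window + 1)
        out.append(sum(prices[lo:i + 1]) / (i + 1 - lo))
    return out

trades = [10.0, 12.0, 11.0, 13.0, 15.0, 14.0]
shuffled = [14.0, 10.0, 13.0, 11.0, 15.0, 12.0]      # same multiset, different order
print(moving_average(trades))
print(moving_average(shuffled))                       # different result: the query is order-dependent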
-
TR2003-841
2003
Secure Untrusted Data Repository (SUNDR)
Li, Jinyuan;
Krohn, Maxwell; Mazieres, David; Shasha, Dennis
Title: Secure Untrusted Data Repository (SUNDR)
Author(s): Li, Jinyuan; Krohn, Maxwell; Mazieres, David; Shasha, Dennis
Abstract:
We have implemented a secure network file system called SUNDR that guarantees the integrity of data even when malicious parties control the server. SUNDR splits storage functionality between two untrusted components, a block store and a consistency server. The block store holds all file data and most metadata. Without interpreting metadata, it presents a simple interface for clients to store variable-sized data blocks and later retrieve them by cryptographic hash.
The consistency server implements a novel protocol that guarantees close-to-open consistency whenever users see each other's updates. The protocol roughly consists of users exchanging version-stamped digital signatures of block server metadata, though a number of subtleties arise in efficiently supporting concurrent clients and group-writable files. We have proven the protocol's security under basic cryptographic assumptions. Without somehow producing signed messages valid under a user's (or the superuser's) public key, an attacker cannot tamper with a user's files---even given control of the servers and network. Despite this guarantee, SUNDR performs within a reasonable factor of existing insecure network file systems.
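The block-store half of the design is easy to picture with a small sketch (a generic illustration of content-addressed storage, not the SUNDR code; SHA-256 is assumed as the hash here): clients store variable-sized blocks and retrieve them by cryptographic hash, so the untrusted store cannot substitute data without the hash check failing.

import hashlib

class BlockStore:
    # Untrusted content-addressed block store: retrieval by hash lets clients detect tampering.
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blocks[digest] = data
        return digest

    def get(self, digest: str) -> bytes:
        data = self.blocks[digest]
        if hashlib.sha256(data).hexdigest() != digest:    # client-side integrity check
            raise ValueError("block store returned tampered data")
        return data

store = BlockStore()
h = store.put(b"file contents v1")
assert store.get(h) == b"file contents v1"
store.blocks[h] = b"maliciously altered"                  # a compromised server rewrites the block
try:
    store.get(h)
except ValueError as e:
    print(e)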
-
Ph.D. Thesis
2003
A framework for optimistic program optimization
Pechtchanski, Igor
Title: A framework for optimistic program optimization
Candidate: Pechtchanski, Igor
Advisor(s): Goldberg, Benjamin
Abstract:
The problem of program optimization is a non-trivial one. Compilers do a fair job, but cannot always deliver the best performance. The expressiveness of general-purpose languages is limited, not allowing programmers to describe expected run-time behavior, for example, and some programs are thus more amenable to optimization than others, depending on what the compiler expects to see. We present a generic framework that addresses this problem in two ways: through verifiable source annotations that guide compiler analyses, and through optimistic use of assumptions and analysis results for the subset of the program seen so far. Two novel applications are presented, one for each of the above approaches: a dynamic optimistic interprocedural type analysis algorithm, and a mechanism for specifying immutability assertions. Both applications result in measurable speedups, demonstrating the feasibility of each approach.
-
TR2003-840
2003
Robust Model-Free Tracking of Non-Rigid Shape
Torresani, Lorenzo;
Hertzmann, Aaron; Bregler, Christoph
Title: Robust Model-Free Tracking of Non-Rigid Shape
Author(s): Torresani, Lorenzo; Hertzmann, Aaron; Bregler, Christoph
Abstract:
We present a robust algorithm for estimating non-rigid motion in video sequences. We build on recent methods for tracking video by enforcing global structure (such as rank constraints) on the tracking. These methods assume color constancy in the neighborhood of each tracked feature, an assumption that is violated by occlusions, deformations, lighting changes, and other effects. Our method identifies outliers while solving for flow. This allows us to obtain high-quality tracking from difficult sequences, even when there is no single "reference frame" in which all tracks are visible.
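A rough sketch of the rank-constraint idea (a generic illustration under synthetic data, not the paper's algorithm): tracked point coordinates are stacked into a measurement matrix, projected onto a low-rank subspace, and tracks with large reprojection residuals are flagged as likely outliers.

import numpy as np

def rank_project(W, rank):
    # Closest matrix of the given rank to a 2F x N track matrix (truncated SVD).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

rng = np.random.default_rng(1)
F, N, rank = 20, 30, 3
W = rng.standard_normal((2 * F, rank)) @ rng.standard_normal((rank, N))   # ideal rank-3 tracks
W += 0.05 * rng.standard_normal(W.shape)             # small tracking noise everywhere
W[:, 5] += rng.standard_normal(2 * F)                # one badly tracked (e.g. occluded) feature

residual = np.linalg.norm(W - rank_project(W, rank), axis=0)
outliers = np.where(residual > 5.0 * np.median(residual))[0]
print(outliers)                                       # the corrupted track stands out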
-
Ph.D. Thesis
2003
Secure and Robust Censorship-Resistant Publishing Systems
Waldman, Marc
Title: Secure and Robust Censorship-Resistant Publishing Systems
Candidate: Waldman, Marc
Advisor(s): Mazieres, David
Abstract:
In many cases, censoring documents on the Internet is a fairly simple task. Almost any published document can be traced back to a specific host, and from there to an individual responsible for the material. Someone wishing to censor this material can use the courts, threats, or other means of intimidation to compel the relevant parties to delete the material or remove the host from the network. Even if these methods prove unsuccessful, various denial of service attacks can be launched against a host to make the document difficult or impossible to retrieve. Unless a host's operator has a strong interest in preserving a particular document, removing it is often the easiest course of action.
A censorship-resistant publishing system allows an individual to publish a document in such a way that it is difficult, if not impossible, for an adversary to completely remove, or convincingly alter, a published document. One useful technique for ensuring document availability is to replicate the document widely on servers located throughout the world. However, replication alone does not block censorship. Replicas need to be protected from accidental or malicious corruption. In addition, a censorship-resistant publishing system needs to address a number of other important issues, including protecting the publisher's identity while simultaneously preventing storage flooding attacks by anonymous users.
This dissertation presents the design and implementation of two very different censorship-resistant publishing systems. The first system, Publius, is a web based system that allows an individual to publish, update, delete and retrieve documents in a secure manner. Publius's main contributions include an automatic tamper checking mechanism, a method for updating or deleting anonymously published content and methods for publishing anonymously hyperlinked content. The second system, Tangler, is a peer-to-peer based system whose contributions include a unique publication mechanism and a dynamic self-policing network. The benefits of this new publication mechanism include the automatic replication of previously published content and an incentive to audit the reliability with which servers store content published by other people. In part through these incentives, the self-policing network identifies and ejects servers that exhibit faulty behavior.
-
TR2003-846
2003
Improved Link-Based Algorithms for Ranking Web Pages
Wang, Ziyang
Title: Improved Link-Based Algorithms for Ranking Web Pages
Author(s): Wang, Ziyang
Abstract:
Several link-based algorithms, such as PageRank [19], HITS [15] and SALSA [16], have been developed to evaluate the popularity of web pages. These algorithms can be interpreted as computing the steady-state distribution of various Markov processes over web pages. The PageRank and HITS algorithms tend to over-rank tightly interlinked collections of pages, such as well-organized message boards. We show that this effect can be alleviated using a number of modifications to the underlying Markov process. Specifically, rather than weighting all outlinks from a given page equally, greater weight is given to links between pages that are, in other respects, farther apart in the Web, and less weight is given to links between pages that are nearby. We have experimented with a number of variants of this idea, using several different measures of "distance" in the Web and several different weighting schemes. We show that these revised algorithms often do avoid the over-ranking problem and give better overall rankings.
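A minimal sketch of the weighting idea (illustrative only; the toy distance measure and weighting function are placeholders, not the measures evaluated in the report): outlink transition probabilities are biased toward pages that are far away under an independent notion of distance, so tightly interlinked cliques reinforce each other less.

import numpy as np

def weighted_transition_matrix(out_links, distance):
    # Row-stochastic transition matrix whose outlink weights grow with "distance".
    n = len(out_links)
    P = np.zeros((n, n))
    for i, targets in enumerate(out_links):
        if not targets:
            P[i, :] = 1.0 / n                        # dangling page: jump anywhere
            continue
        w = np.array([distance(i, j) for j in targets], dtype=float)
        P[i, targets] = w / w.sum()                  # nearby targets get proportionally less weight
    return P

def stationary(P, damping=0.85, iters=200):
    n = P.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1.0 - damping) / n + damping * r @ P
    return r

# Pages 0-2 form a tight clique; pages 3 and 4 are "far" from it.
out_links = [[1, 2, 3], [0, 2], [0, 1, 4], [4], [0]]
far = lambda i, j: 1.0 if {i, j} <= {0, 1, 2} else 3.0
print(stationary(weighted_transition_matrix(out_links, far)))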
-
Ph.D. Thesis
2003
A Qualitative Profile-based Approach to Edge Detection
Yen, Ting-jen
Title: A Qualitative Profile-based Approach to Edge Detection
Candidate: Yen, Ting-jen
Advisor(s): Yap, Chee
Abstract:
Edge detection is a fundamental problem of computer vision and has been widely investigated. We propose a new framework for edge detection based on edge profiles.
Our model, based on one-dimensional qualitative edge profile fitting and edge consistency, will produce one continuous edge from an initial seed point. A "profile" is defined as a finite cross-section of a two-dimensional image along a line segment. "Edge consistency" means that all the profiles on the same edge should be consistent.
Appropriate evaluation functions are needed for different types of edge profiles, such as step edges, ramp edges, etc. An evaluation function must produce local minima at the positions where edges of the given type occur in the profile. Instead of subjective thresholding, image noise is measured statistically and used as a systematic way of filtering out false edges. We describe our method as "qualitative edge profile fitting" because it is not based on arbitrary thresholding. Once an edge point is localized, it can be extended into an edge by matching compatible profiles. Two profiles are considered compatible as long as their average difference is within the noise measurement. Another feature of our approach is its subpixel accuracy. The use of profiles and noise-induced threshold selection makes tasks such as joining broken edges more objective.
We develop the necessary algorithms and implement them. Different evaluation functions are constructed for different edge models and tested on different one-dimensional profiles. The edge detector, using these evaluation functions, is then examined on different images and under different noise conditions.
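A small sketch of a step-edge evaluation function in the spirit described above (a toy formulation, not one of the dissertation's functions): for each candidate split position along a one-dimensional profile, the error of the best two-level fit is recorded, local minima of that error mark candidate edges, and the measured noise level filters out false ones.

import numpy as np

def step_fit_error(profile, pos):
    # Squared error of the best two-level (step) fit with the jump placed at 'pos'.
    left, right = profile[:pos], profile[pos:]
    return float(((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum())

def candidate_edges(profile, noise_sigma):
    errors = np.array([step_fit_error(profile, p) for p in range(1, len(profile))])
    edges = []
    for p in range(1, len(errors) - 1):
        if errors[p] < errors[p - 1] and errors[p] <= errors[p + 1]:   # local minimum
            jump = abs(profile[p + 1:].mean() - profile[:p + 1].mean())
            if jump > 3.0 * noise_sigma:             # noise-based filtering, not an arbitrary threshold
                edges.append(p + 1)
    return edges

rng = np.random.default_rng(0)
profile = np.concatenate([np.full(30, 1.0), np.full(30, 5.0)]) + 0.1 * rng.standard_normal(60)
print(candidate_edges(profile, noise_sigma=0.1))      # a single edge near position 30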
-
TR2003-842
2003
A Distributed Adaptive Cache Update Algorithm for Dynamic Source Routing
Yu, Xin;
Kedem, Zvi M.
Title: A Distributed Adaptive Cache Update Algorithm for Dynamic Source Routing
Author(s): Yu, Xin; Kedem, Zvi M.
Abstract:
On-demand routing protocols use route caches to make routing decisions. Due to frequent topology changes, cached routes easily become stale. To address the cache staleness issue in DSR (the Dynamic Source Routing protocol), prior work mainly used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately predict timeouts because topology changes are unpredictable. In this paper, we present a novel distributed cache update algorithm to make route caches adapt quickly to topology changes without using ad hoc parameters. We define a new cache structure called a cache table to maintain the information necessary for cache updates. When a node detects a link failure, our algorithm proactively notifies all reachable nodes that have cached the broken link in a distributed manner. We compare our algorithm with DSR with path caches and with Link-MaxLife through detailed simulations. We show that our algorithm significantly outperforms DSR with path caches and with Link-MaxLife.
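A toy sketch of proactive, distributed cache invalidation (hypothetical data structures, not the paper's algorithm; the actual cache table also records where each route came from and where it was forwarded): each node indexes its cached routes by the links they contain, and a detected link failure triggers removal of every route using the broken link at every node that cached it.

class RouteCache:
    # Per-node cache table: cached routes plus an index from each link to the routes using it.
    def __init__(self, node):
        self.node = node
        self.routes = set()                          # each route is a tuple of node ids
        self.link_index = {}                         # (u, v) -> set of routes containing that link

    def add_route(self, route):
        self.routes.add(route)
        for u, v in zip(route, route[1:]):
            self.link_index.setdefault((u, v), set()).add(route)

    def invalidate_link(self, link):
        for route in self.link_index.pop(link, set()):
            self.routes.discard(route)

def notify_link_failure(caches, link):
    # The node detecting the failure notifies every reachable node whose cache holds the broken link.
    for cache in caches.values():
        if link in cache.link_index:
            cache.invalidate_link(link)

caches = {n: RouteCache(n) for n in "ABCD"}
caches["A"].add_route(("A", "B", "C", "D"))
caches["B"].add_route(("B", "C", "D"))
notify_link_failure(caches, ("C", "D"))              # link C-D breaks
print(caches["A"].routes, caches["B"].routes)        # both stale routes are gone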