Bob Paige's research aims to improve the production rate and the
reliability of high performance implementations of nonnumerical algorithms.
The approach utilizes program transformations that capture broad
algorithm design principles. These transformations are evaluated
prior to implementation by testing whether they can be used effectively
both to explain complex algorithms and to help design new ones.
The implementation methodology makes use of conditional rewriting
together with logic-based program analysis. The program development
methodology is evaluated by productivity experiments.
- Bloom, B. and Paige, R., " Transformational Design and Implementation
Of A New Efficient Solution To The Ready Simulation Problem,"
Science of Computer Programming, 24(3), pp. 189-220, 1995.
A transformational methodology is described for
simultaneously designing algorithms and developing
programs. The methodology makes use of three
transformational tools - dominated
convergence, finite differencing, and real-time simulation of a set machine
on a RAM. We illustrate the methodology by designing a new algorithm
for deciding when n-state, m-transition processes are ready
similar; a substantial improvement on the Theta(mn**6) algorithm
presented in Bloom's thesis. The methodology is also used to
derive an executable prototype program,
and a program whose performance, we
believe, is competitive with
the most efficient hand-crafted implementation of our algorithm.
Ready simulation is the finest fully abstract notion of process
equivalence in the CCS setting.
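Finite differencing, one of the three tools named above, replaces an expensive recomputation inside a loop by cheap incremental updates that preserve an invariant. A minimal sketch of the idea in Python (our own toy example, not drawn from the paper): maintain the count of even elements of a set under insertions and deletions instead of recounting.

```python
class EvenCounter:
    """Maintain the invariant count == |{x in s : x % 2 == 0}| by
    finite differencing: each update to s is paired with an O(1)
    'difference' update to count, avoiding a full recount."""

    def __init__(self):
        self.s = set()
        self.count = 0

    def insert(self, x):
        if x not in self.s:
            self.s.add(x)
            if x % 2 == 0:        # difference code for the invariant
                self.count += 1

    def delete(self, x):
        if x in self.s:
            self.s.remove(x)
            if x % 2 == 0:        # difference code for the invariant
                self.count -= 1
```

The same pattern scales to the set-former invariants used in the derivations: the query is answered in O(1) because its value is kept materialized.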
- Bouma, W., Fudos, I., Hoffmann, C., Cai, J., and Paige, R.,
" Geometric Constraint Solver," Computer-Aided Design,
Vol. 27, No. 6, pp. 487-501, June 1995.
We report on the development of a two-dimensional geometric constraint
solver. The solver is a major component of a new generation of CAD
systems that we are developing based on a high-level geometry
representation. The solver uses a graph-reduction directed algebraic
approach and achieves interactive speed. We describe the architecture
of the solver and its basic capabilities. Then, we discuss in detail
how to extend the scope of the solver, with special emphasis placed on
the theoretical and human factors involved in finding a solution - in
an exponentially large search space - so that the solution is
appropriate to the application and the way of finding it is intuitive
to an untrained user.
- Cai, J., Facon, P., Henglein, F., Paige, R., and Schonberg, E.,
" Type Analysis and Data Structure Selection," in Constructing
Programs From Specifications, ed. B. Moeller, North-Holland, 1991, pp.
126 - 164.
A formal development of data structure selection in a set-oriented
language is described. The approach uses a type/subtype system
to characterize conditions under which primitive set operations
such as associative access can be simulated on a RAM in real-time.
- Cai, J. and Paige, R., " Program Derivation By Fixed Point
Computation," SCP, 11 (1988/89), pp. 197 - 261.
A transformational paradigm by which nonnumerical algorithms are treated
as fixed point computations is developed.
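The paradigm views a program as computing the least fixed point of a monotone set equation, evaluated with a workset so that each element is processed once. A small illustrative instance (our own example, not the paper's derivation): graph reachability as the least fixed point of R = {start} ∪ succ(R).

```python
def reachable(graph, start):
    """Least fixed point of R = {start} | succ(R), computed with a
    workset so each edge is examined at most once (linear time).
    graph: dict mapping a node to an iterable of successor nodes."""
    reach = {start}
    work = [start]                 # workset of nodes whose successors
    while work:                    # have not yet been propagated
        u = work.pop()
        for v in graph.get(u, ()):
            if v not in reach:
                reach.add(v)
                work.append(v)
    return reach
```

Iterating the equation naively from the empty set would re-scan the whole frontier on every round; the workset is exactly the finite-differencing improvement applied to the fixed point iteration.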
- Cai, J. and Paige, R., " Using Multiset Discrimination To
Solve Language Processing Problems Without Hashing,"
TCS, 145:1-2, July 1995, pp. 189-228.
It is shown how the programming technique of multiset discrimination
can be used to solve a wide variety of programming language problems in
worst case time and space that either matches or is better than the expected
time (under the assumption that hashing takes unit expected time) of the
best previous algorithm. An algorithm is presented that solves iterated
strength reduction together with useless code elimination in worst case time
linear in the maximum length of the unoptimized and optimized programs.
This is three orders of magnitude better than the expected time of the
previous best algorithm due to Cocke and Kennedy.
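The core of multiset discrimination is to partition values into groups of equal ones using symbol-indexed arrays rather than hashing, resetting only the array entries actually touched so the cost stays linear in the worst case. A minimal sketch for strings (our own illustration of the technique, not the paper's full algorithm):

```python
def discriminate(strings):
    """Group indices of equal strings in time linear in total length.
    Buckets are arrays indexed by length and by character code, so no
    hashing is used; only touched bucket entries are reset."""
    n = len(strings)
    if n == 0:
        return []
    # Step 1: partition by length via a length-indexed array.
    max_len = max(len(s) for s in strings)
    by_len = [[] for _ in range(max_len + 1)]
    for i, s in enumerate(strings):
        by_len[len(s)].append(i)
    result = []
    table = [[] for _ in range(256)]      # reusable char-indexed buckets
    for length, group in enumerate(by_len):
        if not group:
            continue
        # Step 2: refine each length class character by character.
        groups = [group]
        for pos in range(length):
            refined = []
            for g in groups:
                if len(g) == 1:
                    refined.append(g)
                    continue
                touched = []
                for i in g:
                    c = ord(strings[i][pos])
                    if not table[c]:
                        touched.append(c)
                    table[c].append(i)
                for c in touched:          # reset only entries we used
                    refined.append(table[c])
                    table[c] = []
            groups = refined
        result.extend(groups)
    return result
```

Each character of each string is inspected a constant number of times, matching the worst-case linear bound quoted above.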
- Cai, J. and Paige, R., " Towards Increased Productivity of
Algorithm Implementation," Proc. ACM SIGSOFT 1993, pp. 71 - 78,
in Soft. Eng. Notes, Vol. 18, Num. 5, Dec. 1993.
This paper reports experimental results that support the feasibility
of a new transformational approach developed by the authors for
implementing complex algorithms correctly and efficiently. The class
of algorithms amenable to our methods includes nonnumerical graph
algorithms. Experiments were performed to measure how this approach
affects productivity (in terms of the number of source lines in the
implementation divided by manual programming time) and running times.
Comparative benchmarks showed that productivity can be increased over
a conventional ad hoc approach by factors ranging from 5.1 to 9.9.
Preliminary results also showed that the running time of C code
produced by this new approach can come within a factor of 1.5 of
tightly coded, high-quality Fortran.
- Cai, J., Paige, R., and Tarjan, R., " More Efficient Bottom-Up
Multi-Pattern Matching In Trees," Theoretical Computer Science, Vol 106,
Num 1, Nov. 1992, pp. 21-60.
A new preprocessing
algorithm is described for bottom-up linear multi-pattern matching
in trees. Even though it is incremental with respect to additions
and deletions of patterns, the algorithm is theoretically and computationally
more efficient than Chase's batch algorithm. Our preprocessing also
adapts to favorable input instances, and on Hoffmann and O'Donnell's
class of Simple Patterns, it is more efficient than their special
purpose algorithm tailored to this class. This algorithm is implemented
in the APTS transformational system.
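To make the problem concrete, here is a naive bottom-up multi-pattern matcher in Python: for each subject subtree it computes the set of pattern subtrees matched there. This is only a toy statement of the problem setting; Chase-style precomputed match-set tables and the paper's incremental preprocessing are omitted. Trees are nested tuples (label, child, ...) or leaf labels, and '*' matches any subtree.

```python
def all_subpatterns(pat, acc):
    """Collect every subtree of a pattern into the set acc."""
    acc.add(pat)
    if isinstance(pat, tuple):
        for c in pat[1:]:
            all_subpatterns(c, acc)
    return acc

def match_sets(subject, patterns):
    """Map each subject subtree to the set of pattern subtrees it
    matches, computed bottom-up from the children's match sets."""
    pats = set()
    for p in patterns:
        all_subpatterns(p, pats)
    result = {}

    def visit(t):
        child_sets = [visit(c) for c in t[1:]] if isinstance(t, tuple) else []
        ms = set()
        for q in pats:
            if q == '*':
                ms.add(q)                      # wildcard matches anything
            elif isinstance(q, tuple) and isinstance(t, tuple):
                if (q[0] == t[0] and len(q) == len(t)
                        and all(qc in cs
                                for qc, cs in zip(q[1:], child_sets))):
                    ms.add(q)
            elif q == t:
                ms.add(q)                      # matching leaf labels
        result[t] = ms
        return ms

    visit(subject)
    return result
```

The preprocessing the paper is about amounts to tabulating, once per pattern set, the function from tuples of children's match sets to the parent's match set, so matching itself takes one table lookup per subject node.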
- Chang, C. and Paige, R., " From Regular Expressions to DFA's
Using Compressed NFA's," TCS, vol 178(1-2), May 1997, pp. 1-36.
Transformational techniques are used to derive a new O(r) time
O(s) space algorithm
for turning a regular expression R of length r
into an O(s) space
representation of McNaughton and Yamada's NFA, where
s is the number of occurrences of alphabet symbols in R, and
s+1 is the number of NFA states.
The standard adjacency list representation of McNaughton and Yamada's
NFA takes up 3s + s**2 space in the worst case.
The adjacency list representation of the NFA produced by Thompson
takes up between 2r and 6r space,
where r can be arbitrarily larger than s.
Given any set V of NFA states, our representation can be used to
compute the set U of states one transition away from the states
in V in optimal time O(|V| + |U|). McNaughton and Yamada's NFA
requires Theta(|V| |U|) time in the worst case. Using Thompson's NFA,
the equivalent calculation requires Theta(r) time in the worst case.
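McNaughton and Yamada's NFA is the position (Glushkov) automaton: one state per occurrence of an alphabet symbol, plus a start state, giving the s+1 states quoted above. A minimal Python sketch of its construction via first/last/follow sets (our own illustration; the paper's O(s)-space compressed representation is not shown):

```python
def glushkov(regex):
    """Compute (symbols, first, follow, last, nullable) for a regex AST
    given as nested tuples: ('sym', c), ('cat', l, r), ('alt', l, r),
    ('star', e). Positions 1..s index the symbol occurrences."""
    symbols = {}                       # position -> character
    follow = {}                        # position -> set of positions

    def walk(n):
        tag = n[0]
        if tag == 'sym':
            p = len(symbols) + 1
            symbols[p] = n[1]
            follow[p] = set()
            return {p}, {p}, False     # first, last, nullable
        if tag == 'cat':
            f1, l1, e1 = walk(n[1]); f2, l2, e2 = walk(n[2])
            for p in l1:               # last of left feeds first of right
                follow[p] |= f2
            return ((f1 | f2) if e1 else f1,
                    (l1 | l2) if e2 else l2,
                    e1 and e2)
        if tag == 'alt':
            f1, l1, e1 = walk(n[1]); f2, l2, e2 = walk(n[2])
            return f1 | f2, l1 | l2, e1 or e2
        if tag == 'star':
            f1, l1, _ = walk(n[1])
            for p in l1:               # loop back from last to first
                follow[p] |= f1
            return f1, l1, True

    first, last, nullable = walk(regex)
    return symbols, first, follow, last, nullable

def matches(regex, text):
    """Run the position NFA on text."""
    symbols, first, follow, last, nullable = glushkov(regex)
    if text == "":
        return nullable
    current = {p for p in first if symbols[p] == text[0]}
    for c in text[1:]:
        current = {q for p in current for q in follow[p]
                   if symbols[q] == c}
    return bool(current & last)
```

Stored as adjacency lists, the follow sets can reach the 3s + s**2 worst-case space mentioned above; the paper's contribution is a representation avoiding that blowup while keeping optimal transition-set computation.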
- Goyal, D. and Paige, R., " The Formal Reconstruction and Improvement Of
The Linear Time Fragment Of Willard's Relational Calculus Subset,"
in Algorithmic Languages and Calculi, R. Bird and L. Meertens (eds.), pp.
382 - 414, Chapman and Hall, 1997.
We consider the fragment of Willard's Relational Calculus Subset (JCSS
96) that he compiles into code that runs in linear time and space
under the assumption that hashing unit space data takes unit time. We
show how to translate this fragment into SETL-like code that runs in
linear time and space on our set theoretic complexity model of
computation. By also showing that this SETL-like code is typable
in a type system that supports real-time simulation of set theoretic
complexity on a pointer RAM, we speed up Willard's queries from linear
expected time to linear worst case time. This is the first time that
type theory has been used as a general tool for algorithmic speedup.
- Goyal, D. and Paige, R., " A New Solution To The Hidden Copy Problem,"
in Proc. SAS 98, LNCS, 1998.
- Keller, J. P. and Paige, R., " Program Derivation With Verified
Transformations - A Case Study,"
Comm. Pure and Applied Math., Vol 48, Num. 9-10, 1996.
A program development methodology based on verified program transformations
is described and illustrated through derivations of a high level
bisimulation algorithm and an improved minimum-state DFA algorithm.
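Both derived algorithms are instances of partition refinement. A small Moore-style sketch of DFA minimization in Python (an illustrative baseline only, not the improved algorithm the paper derives):

```python
def minimize(states, alphabet, delta, accepting):
    """Partition refinement for DFA minimization: start from the
    accepting / non-accepting split and split any block whose members
    send some symbol into different blocks, until stable.
    delta: dict mapping (state, symbol) -> state."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [b for b in partition if b]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        new_partition = []
        for block in partition:
            # Group states by the tuple of target blocks (signature).
            sig = {}
            for s in block:
                key = tuple(block_of[delta[(s, a)]] for a in alphabet)
                sig.setdefault(key, set()).add(s)
            new_partition.extend(sig.values())
            if len(sig) > 1:
                changed = True
        partition = new_partition
    return partition
```

The resulting blocks are the equivalence classes of indistinguishable states; refinements of this scheme (processing only "smaller half" splitters) reach the asymptotically better bounds the derivations target.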
- Paige, R., " Real-Time Simulation of A Set Machine on a RAM,"
ICCI 89, May 1989, in Computing and Information, Vol. II, eds. R. Janicki
and W. Koczkodaj, Canadian Scholars Press, pp. 69-73.
Sufficient semantic conditions are given for when an abstract set-oriented
machine (that can, for example, perform associative access in unit worst
case time) can be simulated on a pointer RAM or an array RAM in real time.
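The key device behind such simulations is a based representation: each element of a declared universe carries, for every set based on that universe, its location in that set, so membership, insertion, and deletion need no hashing. A toy Python sketch of the idea (our own illustration with hypothetical names `Elem` and `BasedSet`; the list growth in `_loc` is amortized here, whereas the real construction preallocates):

```python
class Elem:
    """An element of the base; locs[set_id] is this element's index in
    that based set, or None if absent."""
    def __init__(self, base_index):
        self.base_index = base_index
        self.locs = []

class BasedSet:
    """A set over the base with O(1) membership, insert, and delete,
    using per-element location fields instead of hashing."""
    _next_id = 0

    def __init__(self):
        self.id = BasedSet._next_id
        BasedSet._next_id += 1
        self.members = []

    def _loc(self, e):
        while len(e.locs) <= self.id:     # grow on demand (sketch only)
            e.locs.append(None)
        return e.locs[self.id]

    def contains(self, e):
        return self._loc(e) is not None

    def add(self, e):
        if self._loc(e) is None:
            e.locs[self.id] = len(self.members)
            self.members.append(e)

    def remove(self, e):
        i = self._loc(e)
        if i is not None:
            last = self.members.pop()     # swap-delete keeps members dense
            if last is not e:
                self.members[i] = last
                last.locs[self.id] = i
            e.locs[self.id] = None
```

The semantic conditions in the paper characterize when a program's set operations can all be routed through representations like this one, so that each abstract unit-time operation stays unit time on the RAM.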
- Paige, R., " Efficient Translation of External Input in a
Dynamically Typed Language," Proc. IFIP Congress 94, in IFIP
Transaction A-51, Vol 1, eds. B. Pehrson and I. Simon, North-Holland,
Sept 1994, pp. 603-608.
We consider programming language features and algorithms supporting
the translation of external data into initial data structures for high
level datatypes. Let I be a language of external constants formed
from atomic constants and from set, multiset, and tuple constructors.
Let L be a language denoting data structures used to store values of
I. We show how to read an input string C, decide whether it belongs
to I, convert it to internal form, and build initial L data structures
storing the internal value of C in linear time with respect to the
number of symbols in C.
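A single left-to-right recursive-descent pass suffices for such a language of external constants. A minimal Python sketch for integers, sets written {..}, and tuples written (..) (a hypothetical mini-language illustrating the linear-time bound, not the paper's algorithm, which also builds the associative-access structures):

```python
def read_value(s):
    """Parse one external constant from s in a single pass, so time is
    linear in len(s). Sets become frozensets, tuples become tuples."""
    pos = 0

    def skip():
        nonlocal pos
        while pos < len(s) and s[pos] in ' ,':
            pos += 1

    def value():
        nonlocal pos
        skip()
        c = s[pos]
        if c in '{(':
            close = '}' if c == '{' else ')'
            pos += 1
            items = []
            skip()
            while s[pos] != close:
                items.append(value())
                skip()
            pos += 1
            return frozenset(items) if close == '}' else tuple(items)
        start = pos                      # an integer literal
        while pos < len(s) and (s[pos].isdigit() or s[pos] == '-'):
            pos += 1
        return int(s[start:pos])

    return value()
```

Each input character is examined a constant number of times, which is the linear-time property claimed above; the harder part treated in the paper is building initial data structures that also support unit-time associative access.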
- Paige, R., " Atlantique Research Overview," Proc. Atlantique
Workshop on Semantics Based Program Manipulation, eds. N. Jones and
C. Talcott, DIKU Report 94/12, 1994.
This is an extended research summary.
- Paige, R., " Viewing a program transformation system at work,"
Joint 6th Intl. Conf. on Programming Language Implementation and Logic
Programming (PLILP) and 4th Intl. Conf. on Algebraic and Logic
Programming (ALP), LNCS 844, eds. M. Hermenegildo and J. Penjam,
Springer-Verlag, Sep. 1994, pp. 5-24.
This paper discusses an approach to program development by fully automatic,
generic program transformations that capture algorithm design principles.
Some of the ideas underlying the transformational methodology are
illustrated through explanatory examples of the APTS program transformation
system at work.
- Paige, R., " Future Directions in Program Transformations,"
Computing Surveys, 28A(4), Dec. 1996.
This paper was presented for the Programming Language Working
Group within the ACM Workshop on Strategic Directions in
Computing Research.
- Paige, R. and Yang, Z., " High Level Reading and Data Structure
Compilation," Proc. 24th ACM POPL, Jan. 1997, pp. 456-469.
This paper describes the semantic underpinnings and algorithmic
solution to the problem of translating high level input in string
form into internal data structures. Of central importance is a type
system used to interpret input strings and to model physical structure.
A fundamental characteristic of the physical structure is to support
realtime simulation of associative access (e.g. membership testing)
on a RAM. The reading algorithm runs in linear time in the length
of the input string, and is fully written in terms of the type system.