DOCTORAL DISSERTATION DEFENSE
Candidate: Thanasis Mitsolides
Advisor: Malcolm Harrison

The Design and Implementation of ALLOY, a Higher Level Parallel Programming Language
1:00 p.m., Wednesday, July 3, 1991
12th fl. conference room
719 Broadway





Abstract

The goal of this thesis is to show that it is possible to define a higher level parallel programming language for programming in the large that can easily express both complicated parallel problems and traditional serial ones. Such a language would provide many of the good features of serial and parallel programming languages and be appropriate for programming massively parallel computing systems. To demonstrate this, a simple language called ALLOY was designed. The main features of this language could be incorporated into other languages.

ALLOY directly supports functional, object-oriented, and logic programming styles in a unified and controlled framework. Evaluation modes support serial or parallel execution, eager or lazy evaluation, and non-determinism or multiple solutions. These modes can be combined freely. ALLOY is simple, utilizing only 29 primitives, half of which are for object-oriented programming.
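
To give a flavor of what decoupling an evaluation mode from program logic means, the following is a minimal sketch in Haskell (not ALLOY's own syntax, and assuming the commonly used parallel package): the mapping function itself is unchanged, and only the evaluation strategy marks it as a candidate for parallel evaluation.

    -- Illustration in Haskell, not ALLOY: the program logic (squaring each
    -- element) stays the same; the strategy argument is what requests
    -- parallel evaluation of the result list.
    import Control.Parallel.Strategies (parMap, rdeepseq)

    squaresInParallel :: [Int] -> [Int]
    squaresInParallel = parMap rdeepseq (\x -> x * x)

    main :: IO ()
    main = print (sum (squaresInParallel [1 .. 1000]))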

The power of ALLOY is demonstrated through a wide variety of examples, among them: a) partition sort and an FP library, demonstrating clarity, efficiency, and simple parallelism; b) prime numbers and buffering, demonstrating the ability to select between eager and lazy evaluation; c) systolic sort and merge sort, demonstrating dynamic networks of communicating processes; and d) N-queens and list permutations, demonstrating serial and parallel searching. A library for programming in logic programming styles is also given. Finally, a number of parallel objects demonstrate ALLOY's ability to exploit massively parallel architectures effectively.
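
As a hedged sketch of the kind of example listed under (b), written in Haskell rather than ALLOY, a lazily produced stream of primes is evaluated only as far as its consumer demands:

    -- Illustration in Haskell, not ALLOY: an infinite stream of primes,
    -- produced lazily by a trial-division sieve.
    primes :: [Integer]
    primes = sieve [2 ..]
      where
        sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    main :: IO ()
    main = print (take 10 primes)  -- demand determines how far the stream is built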

An interpreter for ALLOY, together with a number of utilities and a programming environment, has been written in Common Lisp. The system is available via anonymous ftp. It is shown that ALLOY can have a reasonably efficient implementation on shared-memory multiprocessor (MIMD) systems supporting highly parallel operations, on distributed architectures, and possibly on dataflow architectures as well.