Programming Languages

G22.2110 Spring 1998


Class #12

Logic Programming Languages

Concurrent Programming



Read Sethi’s Chapters 11, 12, handouts and references




C12.0. Current Assignments, PL Project #4

C12.1. Logic Programming Languages

C12.2. Concurrent Programming

C12.3. Functional Programming with SML (Ed Osinsky)

C12.4. Comments on Sethi's Chapters 11, 12


C12.0. Current Assignments, PL Project #4


Missing homeworks, incompletes


Homework #11 due today


PL Project – Part IV due May 4


C12.1. Logic Programming Languages


C12.1.1. Use of Numeric Computation in Prolog


Suppose we know the average speeds of several automobiles on a particular racetrack and the amount of time they are on the track.

This basic information can be coded as facts, and the relationship between speed, time, and distance can be written as a rule.


speed(ford, 100).
speed(chevy, 105).
speed(dodge, 95).
speed(volvo, 80).

time(ford, 20).
time(chevy, 21).
time(dodge, 24).
time(volvo, 24).

distance(X, Y) :- speed(X, Speed),
                  time(X, Time),
                  Y is Speed * Time.


Queries can request the distance traveled by a particular car.


The query

distance(chevy, Chevy_Distance).

would instantiate Chevy_Distance with the value 2205.



With tracing on, the same query

distance(chevy, Chevy_Distance).

produces:


(1) 1 Call: distance(chevy, _0) ?
(2) 2 Call: speed(chevy, _5) ?
(2) 2 Exit: speed(chevy, 105)
(3) 2 Call: time(chevy, _6) ?
(3) 2 Exit: time(chevy, 21)
(4) 2 Call: _0 is 105 * 21 ?
(4) 2 Exit: 2205 is 105 * 21
(1) 1 Exit: distance(chevy, 2205)


C12.1.2. Appending to a list in Prolog


append([], List, List).
append([Head | List_1], List_2, [Head | List_3]) :-
    append(List_1, List_2, List_3).


trace.
append([bob, jo], [jake, darcie], Family).


(1) 1 Call: append([bob, jo], [jake, darcie], _10) ?
(2) 2 Call: append([jo], [jake, darcie], _18) ?
(3) 3 Call: append([], [jake, darcie], _25) ?
(3) 3 Exit: append([], [jake, darcie], [jake, darcie])
(2) 2 Exit: append([jo], [jake, darcie], [jo, jake, darcie])
(1) 1 Exit: append([bob, jo], [jake, darcie], [bob, jo, jake, darcie])

Family = [bob, jo, jake, darcie]



C12.2. Concurrent Programming


C12.2.1. Introduction


Concurrency: instruction level, statement level, unit level, program level.


Only statement and unit levels involve language issues.


Symmetric subprograms: a unit returns control to its caller after partial execution. Execution can later be restarted from the point where the unit relinquished control.


Concurrent subprograms: more than one program unit executes at the same time (physically, on separate processors, or logically, in some time-sliced fashion on a single-processor system).


Statement-level concurrency: largely a matter of specifying how data should be distributed over multiple memories and which statements can be executed concurrently.


C12.2.1.1. Multiprocessor Architectures


Single-Instruction Multiple-Data (SIMD): multiple processors execute the same instruction simultaneously, each on different data. Each processor has its own local memory. One processor is used as the controller for the other processors. No synchronization required in software.


Multiple-Instruction Multiple-Data (MIMD): processors operate independently but can be synchronized. MIMD machines can be either distributed (each processor has its own memory) or shared-memory systems.

Synchronization must be provided in software. MIMD machines support unit-level concurrency.


C12.2.1.2. Categories of Concurrency


Quasi-concurrency: execution of symmetric units, in which a number of coroutines cooperate to interleave their execution sequences. Supports a single thread of control.


Physical concurrency: several program units from the same program execute simultaneously. Supports multiple threads of control.


Logical concurrency: actual execution of programs takes place in interleaved fashion on a single processor.


C12.2.2. Unit-level concurrency


C12.2.2.1. Fundamental Concepts



Tasks

A task is a program unit that can be in concurrent execution with other program units.

May be started implicitly (unlike ordinary subprograms, which must be called explicitly).

Can provide one thread of control in a program.

Communicates with other tasks through shared nonlocal variables, through message passing, or through parameters.

A task that neither communicates with nor affects any other task is said to be disjoint.



Synchronization

The mechanism that controls the order in which tasks execute. Two kinds: cooperation synchronization (a task must wait for another task to complete some activity before it can continue) and competition synchronization (two tasks require the use of some resource that cannot be used simultaneously).

Three methods of providing mutually exclusive access to a shared resource: semaphores, monitors, and message passing.

Both language design and program design must ensure task liveness and avoid deadlock.
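As a small illustration of competition synchronization (a Go sketch, not from the handout): two tasks update one shared counter, and a mutex provides the mutually exclusive access that semaphores and monitors, discussed below, also aim to provide.

package main

import (
	"fmt"
	"sync"
)

// Two tasks compete for one shared counter; the mutex serializes the
// read-modify-write so that the updates cannot interleave.
func main() {
	var mu sync.Mutex
	var wg sync.WaitGroup
	counter := 0
	for t := 0; t < 2; t++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				mu.Lock()
				counter++ // the contested resource
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 2000 with the lock; unpredictable without it
}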


C12.2.2.2. Coroutines (SIMULA 67, BLISS, INTERLISP, MODULA 2)


A coroutine is a subprogram that has multiple entry points, which are controlled by the coroutine itself, and the means to maintain its status between activations.

Coroutines must be history sensitive, and thus have static local variables.

Only one coroutine actually executes at a given time.


e.g., Modula-2

Modula-2 has no statements, data types, or operators for either symmetric or concurrent units.

Instead of enlarging the language, it supplies library modules that provide sufficient facilities for building and using coroutines.


DEFINITION MODULE Processes;

  EXPORT SIGNAL, StartProcess, SEND, WAIT, Awaited, Init;

  TYPE SIGNAL;

  PROCEDURE StartProcess (P : PROC; n : CARDINAL);
  PROCEDURE SEND (VAR s : SIGNAL);
  PROCEDURE WAIT (VAR s : SIGNAL);
  PROCEDURE Awaited (s : SIGNAL) : BOOLEAN;
  PROCEDURE Init (VAR s : SIGNAL);

END Processes.

An object of type SIGNAL consists of an integer counter and a queue for storing process descriptors.


WAIT (signal1):
IF signal1's counter > 0 THEN
    decrement signal1's counter
ELSE
    put the caller in signal1's queue
    attempt to transfer control to some ready process
    (if the ready queue is empty, deadlock occurs)
END IF


SEND (signal1):
IF signal1's queue is empty {no process is waiting} THEN
    increment signal1's counter
ELSE
    put the calling process in the ready queue
    transfer control to a process from signal1's queue
END IF
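The same counter-plus-queue behavior can be sketched in Go (an illustrative translation, not Modula-2; sync.Cond plays the role of signal1's queue of blocked processes).

package main

import (
	"fmt"
	"sync"
)

// Signal mimics the SIGNAL object: an integer counter plus a queue of
// blocked processes, the latter managed by sync.Cond.
type Signal struct {
	mu    sync.Mutex
	cond  *sync.Cond
	count int
}

func NewSignal(initial int) *Signal {
	s := &Signal{count: initial}
	s.cond = sync.NewCond(&s.mu)
	return s
}

// Wait: if the counter is positive, decrement it; otherwise block in
// the signal's queue until a Send arrives.
func (s *Signal) Wait() {
	s.mu.Lock()
	for s.count == 0 {
		s.cond.Wait() // the caller joins signal1's queue
	}
	s.count--
	s.mu.Unlock()
}

// Send: increment the counter and wake one waiter if any; a blocked
// waiter consumes the increment at once, matching the pseudocode's
// "transfer control to a process from signal1's queue".
func (s *Signal) Send() {
	s.mu.Lock()
	s.count++
	s.mu.Unlock()
	s.cond.Signal()
}

func main() {
	s := NewSignal(0)
	go s.Send()
	s.Wait() // blocks until the goroutine's Send
	fmt.Println("resumed")
}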



C12.2.2.3. Language Design for Concurrency


Design Issues


How is competition synchronization provided?

How is cooperation synchronization provided?

How and when do tasks begin and end execution?

Are tasks statically or dynamically created?




Semaphores

Semaphores provide competition synchronization through mutually exclusive access to shared data structures.

The idea is to place a "guard" around the code that accesses the shared data structure.

e.g., Competition and cooperation synchronization for a concurrently accessed shared buffer.

semaphore access, fullspots, emptyspots;
access.count := 1;
fullspots.count := 0;
emptyspots.count := BUFLEN;

process producer;
loop
  -- produce VALUE --
  wait (emptyspots);   { wait for a space }
  wait (access);       { wait for access }
  DEPOSIT (VALUE);
  release (access);    { relinquish access }
  release (fullspots); { increase filled spaces }
end loop;
end producer;

process consumer;
loop
  wait (fullspots);     { make sure it is not empty }
  wait (access);        { wait for access }
  FETCH (VALUE);
  release (access);     { relinquish access }
  release (emptyspots); { increase empty spaces }
  -- consume VALUE --
end loop;
end consumer;
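A runnable Go rendering of the same protocol (a sketch, not from the handout): buffered channels pre-loaded with tokens act as the counting semaphores, so a receive is wait and a send is release.

package main

import "fmt"

// newSem builds a counting semaphore from a buffered channel holding
// "count" tokens: receiving a token is wait(), returning one is release().
func newSem(count, capacity int) chan struct{} {
	s := make(chan struct{}, capacity)
	for i := 0; i < count; i++ {
		s <- struct{}{}
	}
	return s
}

func main() {
	const BUFLEN = 4
	access := newSem(1, 1)               // access.count := 1
	fullspots := newSem(0, BUFLEN)       // fullspots.count := 0
	emptyspots := newSem(BUFLEN, BUFLEN) // emptyspots.count := BUFLEN

	buffer := []int{}
	done := make(chan struct{})

	go func() { // producer
		for value := 1; value <= 10; value++ { // -- produce VALUE --
			<-emptyspots                   // wait (emptyspots)
			<-access                       // wait (access)
			buffer = append(buffer, value) // DEPOSIT (VALUE)
			access <- struct{}{}           // release (access)
			fullspots <- struct{}{}        // release (fullspots)
		}
	}()

	go func() { // consumer
		for i := 0; i < 10; i++ {
			<-fullspots // wait (fullspots)
			<-access    // wait (access)
			value := buffer[0]
			buffer = buffer[1:]            // FETCH (VALUE)
			access <- struct{}{}           // release (access)
			emptyspots <- struct{}{}       // release (emptyspots)
			fmt.Println("consumed", value) // -- consume VALUE --
		}
		close(done)
	}()

	<-done
}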




Monitors

Idea: encapsulate shared data structures with their operations and hide their representations (make shared data structures ADTs).


Monitors: program units that gather all synchronization operations on shared data.


First PLs to incorporate monitors: Concurrent Pascal, Modula, CSP/k, Mesa.


Concurrent Pascal is Pascal + classes from SIMULA 67, processes, and monitors.


Concurrent Pascal monitors are ADTs for shared data resources:

type monitor_name = monitor(formal parameters)

-- declarations of shared variables --

-- definitions of local procedures --

-- definitions of exported procedures --

-- initialization code --
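A monitor can be approximated in a language without them; the following Go sketch (an illustration, not Concurrent Pascal) hides a shared buffer behind exported operations that all acquire one lock, with condition variables supplying cooperation synchronization.

package main

import (
	"fmt"
	"sync"
)

// Monitor-style shared buffer: the representation is hidden, every
// exported operation holds the single monitor lock (competition
// synchronization), and condition variables make callers wait until
// the buffer state permits the operation (cooperation synchronization).
type Buffer struct {
	mu       sync.Mutex
	notFull  *sync.Cond
	notEmpty *sync.Cond
	items    []int
	max      int
}

func NewBuffer(max int) *Buffer {
	b := &Buffer{max: max}
	b.notFull = sync.NewCond(&b.mu)
	b.notEmpty = sync.NewCond(&b.mu)
	return b
}

func (b *Buffer) Deposit(v int) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == b.max {
		b.notFull.Wait() // wait for a free slot
	}
	b.items = append(b.items, v)
	b.notEmpty.Signal()
}

func (b *Buffer) Fetch() int {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == 0 {
		b.notEmpty.Wait() // wait for something to take
	}
	v := b.items[0]
	b.items = b.items[1:]
	b.notFull.Signal()
	return v
}

func main() {
	b := NewBuffer(4)
	go b.Deposit(42)
	fmt.Println(b.Fetch()) // 42
}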



Concurrency Through Message Passing


Monitors cannot be used effectively for synchronization in a distributed system where each processor has its own memory, rather than a single shared memory.


e.g., Ada 83's rendezvous (synchronous message passing)
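Synchronous message passing can be sketched with an unbuffered Go channel (an analogy, not Ada): the sender blocks until the receiver is ready to accept, which is the essence of a rendezvous, though Ada's entry/accept mechanism is considerably richer.

package main

import "fmt"

func main() {
	requests := make(chan int) // unbuffered: send and receive must meet
	done := make(chan struct{})

	go func() { // the "server" task accepts requests one at a time
		for r := range requests {
			fmt.Println("accepted request", r)
		}
		close(done)
	}()

	for i := 1; i <= 3; i++ {
		requests <- i // blocks until the server accepts: the rendezvous
	}
	close(requests)
	<-done
}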


Concurrency in Ada 95



Ada 95 adds two concurrency mechanisms:

Protected objects: provide more convenient access control for shared data.

Asynchronous communication: a method of providing asynchronous task communication.


C12.2.3. Statement-level concurrency


e.g., High Performance Fortran (HPF, 1993)

Extensions to FORTRAN 90

HPF includes specification statements (number of processors, distribution of data over the memories of those processors, alignment of data with other data in terms of memory placement), and intrinsic, or built-in, subprograms.

!HPF$ PROCESSORS procs (n)

!HPF$ DISTRIBUTE (kind) ONTO procs :: identifier_list

kind is either CYCLIC or BLOCK

!HPF$ ALIGN array1_element WITH array2_element



REAL list_1 (1000), list_2 (1000)
INTEGER list_3 (500), list_4 (501)
!HPF$ PROCESSORS procs (10)
!HPF$ DISTRIBUTE (BLOCK) ONTO procs :: list_1, list_2
!HPF$ ALIGN list_3 (index) WITH list_4 (index+1)

Because of the distribution and alignment directives, assignments such as

list_1 (index) = list_2 (index)
list_3 (index) = list_4 (index+1)

can be executed with little or no interprocessor communication.


FORALL (index = 1:1000) list_1 (index) = list_2 (index)

(specifies a collection of statements that may be executed concurrently)
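A rough analogue of FORALL in Go (a sketch, not HPF): the index range is split into contiguous chunks, mimicking a BLOCK distribution onto 10 processors, and each chunk is assigned concurrently by its own goroutine.

package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 1000
	const procs = 10 // plays the role of !HPF$ PROCESSORS procs (10)
	list1 := make([]float64, n)
	list2 := make([]float64, n)
	for i := range list2 {
		list2[i] = float64(i)
	}

	var wg sync.WaitGroup
	block := n / procs // BLOCK distribution: contiguous chunks
	for p := 0; p < procs; p++ {
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				list1[i] = list2[i] // the FORALL body
			}
		}(p*block, (p+1)*block)
	}
	wg.Wait()
	fmt.Println(list1[n-1]) // 999
}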


C12.3. Functional Programming with SML


Ed Osinski will speak about SML and Project #4.


C12.4. Comments on Sethi's Chapters 11, 12