No subject

Robert I. Soare soare at cs.uchicago.edu
Tue Jul 27 15:03:43 EDT 1999


Date: Wed, 21 Jul 1999 16:31:41 -0500
To: soare at cs.uchicago.edu (Robert I. Soare)
From: "Stuart A. Kurtz" <stuart at cs.uchicago.edu>


Dear Bob,

I don't have a reference for Homer-Maass close at hand.  I'm sure that
Steve can provide it.  The KMR reference is

  S. Kurtz, S. Mahaney, J. Royer, "Collapsing Degrees."  Journal of
    Computer and System Sciences, 37:247-268, 1988.

Briefly, a collapsing degree is a polynomial time many-one degree that
consists of a single polynomial time isomorphism type.  Collapsing
degrees are interesting because this is precisely the behavior of the
NP-complete degree predicted by the Berman-Hartmanis Isomorphism Conjecture.
The following two statements are equivalent:

  The Berman-Hartmanis Isomorphism Conjecture: Any two NP-complete sets
    are polynomial-time isomorphic.

  The Berman-Hartmanis Isomorphism Conjecture (as restated by KMR):
    The polynomial time degree of the NP-complete sets collapses.
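
In symbols (my own shorthand, not KMR's exact notation): write
A \equiv_m^p B for polynomial-time many-one equivalence, and A \cong^p B
for polynomial-time isomorphism, i.e., a polynomial-time computable,
polynomial-time invertible bijection carrying A onto B.  Then

  \[ \deg_m^p(A)\ \text{collapses} \iff (\forall B)\,[\, B \equiv_m^p A \;\Rightarrow\; B \cong^p A \,] \]
  \[ \text{Isomorphism Conjecture} \iff \deg_m^p(\mathrm{SAT})\ \text{collapses} \]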

The issue addressed by [KMR88] was whether or not collapsing degrees
exist.  We showed (by a standard finite-extension argument) that
collapsing degrees do exist.  If I recall correctly, the finite extension
argument did not produce a computable example, although it would almost
certainly have been computable in 0'.

But noncomputable polynomial time degrees are outside of the domain of
interest of (theoretical) computer science.  By using a priority argument,
we were able to demonstrate that a collapsing degree exists in exponential
time -- a far more interesting and significant result, and the primary
technical contribution of [KMR88].

I think it is worth knowing that organizing real computations around a
dynamic collection of goals, each with its own priority,
is a genuinely useful approach.

For example, a multi-tasking single-processor computer (the most common
kind in practice) often allocates processor time among tasks by viewing
the completion of each task as a goal, and associating with each goal a
priority based on how much processor time the task has already received
and on when it last ran.  The analysis of various
scheduling algorithms (e.g., guaranteeing that each process eventually
receives adequate time to complete, even if other processes are
nonterminating) would be strangely familiar to a computability theorist.
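
Here is a toy version of the idea (the aging rule below is invented for
this message, not any real kernel's policy): always run the task that has
had the least attention so far, so every finite task eventually finishes
even when nonterminating tasks are in the mix.

  // Toy priority scheduler: always run the task that has received the
  // least CPU so far (ties broken by which ran longest ago).  The rule
  // is a caricature, but every finite task is guaranteed to finish.
  #include <iostream>
  #include <string>
  #include <vector>

  struct Task {
      std::string name;
      long cpu_used;    // time slices received so far
      long last_run;    // tick at which it last ran (0 = never)
      long work_left;   // slices still needed; -1 means nonterminating
  };

  int main() {
      std::vector<Task> tasks = {
          {"editor",   0, 0,  3},
          {"compiler", 0, 0,  6},
          {"daemon",   0, 0, -1},   // never finishes, never starves the others
      };

      for (long tick = 1; tick <= 20 && !tasks.empty(); ++tick) {
          // Highest-priority goal: least CPU so far, then oldest last_run.
          auto best = tasks.begin();
          for (auto it = tasks.begin(); it != tasks.end(); ++it)
              if (it->cpu_used < best->cpu_used ||
                  (it->cpu_used == best->cpu_used && it->last_run < best->last_run))
                  best = it;

          best->cpu_used += 1;               // give it one time slice
          best->last_run = tick;
          std::cout << "tick " << tick << ": ran " << best->name << "\n";

          if (best->work_left > 0 && --best->work_left == 0) {
              std::cout << best->name << " finished\n";   // goal satisfied
              tasks.erase(best);
          }
      }
  }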

Another, similar example is writing an application for a personal computer
(Mac or Windows).  These programs can be thought of as responding
to a sequence of user- and system-initiated events.  Architecturally, there
is an event queue.  Events are usually prioritized by time (it is more
important to handle old events than new), but some events (updates) should
be handled as soon as possible.
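
In the same toy style (again invented for this message, not any real
toolkit's API), the dispatching rule looks roughly like this:

  // Toy event loop: events are handled oldest-first, except that a
  // pending Update event jumps the line and is handled immediately.
  #include <deque>
  #include <iostream>
  #include <string>

  struct Event {
      std::string kind;   // "KeyPress", "MouseClick", "Update", ...
      long        time;   // when it was posted
  };

  int main() {
      std::deque<Event> pending = {
          {"KeyPress", 1}, {"MouseClick", 2}, {"Update", 3}, {"KeyPress", 4},
      };

      while (!pending.empty()) {
          // Prefer a pending Update; otherwise take the oldest event (front).
          auto it = pending.begin();
          for (auto j = pending.begin(); j != pending.end(); ++j)
              if (j->kind == "Update") { it = j; break; }

          std::cout << "handling " << it->kind
                    << " (posted at time " << it->time << ")\n";
          pending.erase(it);
      }
  }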

Indeed, a priority-based programming architecture is so common and useful
that specialized data structures have been developed to support it, and are
a part of standard programming environments (e.g., the std::priority_queue
adaptor is part of C++'s Standard Template Library).  I do not know whether
or not there is an intellectual linkage between priority methods in
computability theory and in programming practice; but I do know that the
use of priority methods in practice is real.
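
For the curious, the C++ idiom is about as short as it sounds; a minimal
sketch (the goals and their priorities are made up):

  // Minimal std::priority_queue usage: goals with numeric priorities,
  // always serviced highest-priority-first.
  #include <iostream>
  #include <queue>
  #include <string>
  #include <utility>

  int main() {
      // std::pair orders by its first component, so the goal with the
      // largest priority number is always on top of the queue.
      std::priority_queue<std::pair<int, std::string>> goals;
      goals.push({1, "log statistics"});
      goals.push({5, "redraw the screen"});
      goals.push({3, "save the file"});

      while (!goals.empty()) {
          std::cout << "priority " << goals.top().first
                    << ": " << goals.top().second << "\n";
          goals.pop();
      }
  }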

An objection you might make is that the examples I've described are all
trivial -- there are priorities without injury.  But there is more going
on here than meets the eye!  In the case of an operating system, allocating
resources to one process may make them unavailable to other processes
until that process completes, and may even cause effort associated with one
process to be lost.  There is a substantial literature in CS that deals
with such problems.  In the case of an application, responding to a user
event may make the screen inconsistent with the state of the program, 
injuring the visual consistency requirement, and generating a new
(and high priority!) update event.  This is infinite injury!
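
Extending the toy event loop above (a caricature, not real toolkit code):
every user event injures the consistency requirement and queues a
high-priority Update whose handler restores it, so over an unbounded run
the requirement is injured and repaired again and again.

  // Caricature of injury: handling a user event invalidates the display
  // and posts an Update event; handling the Update restores consistency.
  #include <deque>
  #include <iostream>
  #include <string>

  int main() {
      std::deque<std::string> events = {"KeyPress", "KeyPress", "MouseClick"};
      bool display_consistent = true;

      while (!events.empty()) {
          // Updates jump to the front of the line; otherwise oldest first.
          auto it = events.begin();
          for (auto j = events.begin(); j != events.end(); ++j)
              if (*j == "Update") { it = j; break; }
          std::string e = *it;
          events.erase(it);

          if (e == "Update") {
              display_consistent = true;     // requirement restored
              std::cout << "Update: display consistent again\n";
          } else {
              display_consistent = false;    // requirement injured
              events.push_back("Update");    // schedule the repair
              std::cout << e << ": display injured, Update queued\n";
          }
      }
      std::cout << "consistent at exit: " << std::boolalpha
                << display_consistent << "\n";
  }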

The moral may be surprising, even to a computability theorist.  Priority
methods are routinely taught to freshman computer scientists, and they are
a major organizing principle of many large programs, including the
applications and operating systems we encounter on a daily basis.

Best,

Stu
---------------
Stuart A. Kurtz
Professor and Chair
Department of Computer Science
The University of Chicago

web:   http://www.cs.uchicago.edu/~stuart
email: stuart at cs.uchicago.edu
phone: (773)-702-3493
fax:   (773)-702-8487



