# [FOM] Computability question

Tucker J.V. J.V.Tucker at swansea.ac.uk
Sat Mar 30 08:36:56 EDT 2013

```Dan,

The whole question of invariance is complicated, even in the case of computability.

1. I suggest you look at these papers that address invariance issues in general terms:

Peter van Emde Boas,  Machine Models and Simulations, in Handbook of Theoretical Computer Science A, Elsevier, 1990.

This discusses classes of machines that can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space.

There is also this paper:

C. Slot and P. van Emde Boas, On tape versus core: an application of space efficient perfect hash functions to the invariance of space, STOC, 1984.

2. I looked at the complexity problem for abstract programs (e.g., while programs, register machines, etc.) working on any universal algebra A, using norms ||-|| : A --> {0, 1, 2, ...} on data and cost functions for calling the basic operations of the algebra:

Peter R. J. Asveld, J. V. Tucker: "Complexity Theory and the Operational Structure of Algebraic Programming Systems". Acta Informatica. 17 (1982) 451-476.
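To make the setup concrete, here is a minimal sketch (hypothetical names, not code from the paper) of charging abstract programs via cost functions on the basic operations, with data measured by a norm ||-|| : A --> {0, 1, 2, ...}, for the simple algebra of naturals with zero and successor:

# The algebra: basic operations, each paired with a unit cost per call.
ops = {
    "zero": (lambda: 0, 1),        # (operation, cost of one call)
    "succ": (lambda x: x + 1, 1),
}
norm = lambda a: a                 # for the naturals, the norm is the value itself

def apply_op(name, args, cost_so_far):
    """Apply a basic operation of the algebra, accumulating its cost."""
    f, c = ops[name]
    return f(*args), cost_so_far + c

# Evaluating succ(succ(zero)) charges 3 basic-operation calls:
v, cost = apply_op("zero", (), 0)
v, cost = apply_op("succ", (v,), cost)
v, cost = apply_op("succ", (v,), cost)
# v == 2, cost == 3, norm(v) == 2

Complexity is then the accumulated cost as a function of the norms of the inputs.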

3. Computability theory can be generalised to abstract algebras, where several important distinct classes of computable functions arise, depending upon the model. Invariance fragments a little, though some models are clearly more basic than others; see:

J. V. Tucker and J. I. Zucker,  Computable functions and semicomputable sets on many sorted algebras, in S. Abramsky, D. Gabbay and T Maibaum (eds.), Handbook of Logic for Computer Science. Volume V Logic and Algebraic Methods, 2000, Oxford University Press, 317-523.

The basic model is represented by (say) while programs + unbounded arrays, for which crucial theorems, such as universality, can be proved.

4. If you look at computability on the reals then invariance becomes very fragmented for this classic algebra. (This situation is common for topological algebras.) The problem has recently been studied in:

Jens Blanck, Interval Domains and Computable Sequences: A Case Study of Domain Reductions, The Computer Journal, 56 (1) (2013), 45-52.

5. If you look at computational models equipped with simple physical systems as oracles, you get a variety of complexity classes that break the Church-Turing barrier: e.g., Turing machines with such oracles, even under the restriction of polynomial time, can give you P/poly or P/log*.

This scratches the surface of some current and not so current problems.

John

=====================================================================

From: fom-bounces at cs.nyu.edu [fom-bounces at cs.nyu.edu] on behalf of Kreinovich, Vladik [vladik at utep.edu]
Sent: Friday, March 29, 2013 10:29 PM
To: fom at cs.nyu.edu
Subject: Re: [FOM] Computability question

While there are many theorems on the equivalence of the polynomial time class across different computational devices (different types of Turing machines, Kolmogorov-Uspensky algorithms, etc.), the situation with recursive functions is different: they are equivalent from the viewpoint of what is computable, but they are clearly not equivalent when it comes to comparing the number of elementary operations.

For example, the standard definition of addition as a recursive function is, in ordinary mathematical notation, add(n,0) = n and add(n,m+1) = add(n,m)+1. This means that to compute n+m, we need to perform m successor operations (and no fewer are possible, since each successor operation increases the value by only 1). For m = 2^k, a number represented by k+1 binary digits, this means that the number of computational steps needed is exponential in the length of the input.
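The exponential blow-up in this example is easy to check by instrumenting the definition (a hypothetical sketch, with the recursion on m unrolled into a loop to stay within Python's stack limit):

def add(n, m):
    """Primitive recursive addition: add(n, 0) = n, add(n, m+1) = add(n, m) + 1.
    Returns (result, number of successor applications)."""
    steps = 0
    result = n
    for _ in range(m):        # the recursion on m, unrolled
        result = result + 1   # one application of the successor function
        steps += 1
    return result, steps

# For m = 2**k (a number with k+1 binary digits), the cost is 2**k
# successor steps, exponential in the length of the input:
# add(3, 2**10) == (1027, 1024)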

-----Original Message-----
From: fom-bounces at cs.nyu.edu [mailto:fom-bounces at cs.nyu.edu] On Behalf Of Daniel Schwartz
Sent: Thursday, March 28, 2013 8:01 PM
To: fom at cs.nyu.edu
Subject: [FOM] Computability question

Dear FOM list,

I am new to this list and so don't know if this issue has previously been discussed, but I have a question:

Is the property of polynomial time computability invariant among the
various models of computation?

In particular, suppose we define the complexity of a recursive function definition as the number of applications of elementary functions (successor, zero, projection) required to evaluate the definition for a given argument.
Then a question is: Is there a one-to-one correspondence between polynomial time recursive function definitions and polynomial time Turing machines?

I haven't been able to find any indication that anyone has worked on this problem.  Would you know of any, and of any results along these lines?

Thanks and best regards,

--Dan Schwartz

************************************************************************
Dr. Daniel G. Schwartz                            Office    850-644-5875
Dept. of Computer Science, MC 4530                CS Dept   850-644-4029
Florida State University                          Fax       850-644-0058
Tallahassee, FL 32306-4530                        schwartz at cs.fsu.edu
U.S.A.                                   http://www.cs.fsu.edu/~schwartz
************************************************************************
_______________________________________________
FOM mailing list
FOM at cs.nyu.edu
http://www.cs.nyu.edu/mailman/listinfo/fom
```