[FOM] The logical consequences of simplifying assumptions
mathlogic at chalisque.com
Sun Mar 24 11:07:36 EDT 2013
I'll try to be brief here.
This was partially motivated by Martin Davis's recent post, 'Should mathematicians be explicit about what they are assuming?'
Consider the typical development of a theory of, say, physics or economics. An assumption is made such as, for small x, sin x = x. Then we have the situation that, without this assumption, we can usually divide by sin x - x, yet under this assumption we never can. What happens when someone relies on results from two separate theoretical developments which make different, and subtly incompatible, simplifying assumptions?
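The flip described above can be made concrete in a few lines of code (my own illustrative sketch, not from the post): before adopting the small-angle assumption, division by sin x - x succeeds for small nonzero x; after adopting it, the same division fails.

```python
import math

# Without the simplifying assumption, sin(x) - x is nonzero for small
# nonzero x, so dividing by it is legitimate.
x = 0.1
exact = 1.0 / (math.sin(x) - x)   # defined and finite

# Under the assumption sin(x) = x, the very same expression becomes 1/0.
approx_sin = lambda t: t          # the small-angle simplification
try:
    1.0 / (approx_sin(x) - x)
except ZeroDivisionError:
    print("under the assumption, division by sin(x) - x is impossible")
```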
I'll elaborate. Essentially, this arises from a situation that occurs whenever one is working with a structure in which there is an operation which is nearly always invertible, and one makes some kind of simplifying assumption. The classical case is multiplication of numbers, wherein we can reverse the operation of multiplying by a if, and only if, a is nonzero, and in particular, numbers arbitrarily close to zero all have multiplicative inverses.
Suppose that some practical user of mathematical methods (say a physicist or an economist) makes a simplifying assumption in the development of some theory: that a=b whenever a-b is small enough to seem negligible. If we identify a with b, then we effectively un-identify b from any other quantity it was originally equal to, say c. Without this assumption, 1/(a-b) is defined and finite, yet 1/(b-c) is not defined. With this assumption, it is the other way around: 1/(a-b) is now undefined, while 1/(b-c) is defined and finite.
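The same reversal can be checked with exact arithmetic (again my own sketch, with the "negligible" difference chosen arbitrarily as 10^-9): before the identification, a-b is nonzero and b-c is zero; after replacing b's value by a's, the roles swap.

```python
from fractions import Fraction

# b and c start out equal; a differs from b by a "negligible" amount.
a = Fraction(1) + Fraction(1, 10**9)
b = Fraction(1)
c = Fraction(1)

# Without the assumption: 1/(a-b) is finite, 1/(b-c) is undefined.
assert a - b != 0 and b - c == 0

# With the assumption a = b (b's value is replaced by a's):
b_identified = a
assert b_identified - a == 0   # 1/(a-b) is now undefined
assert b_identified - c != 0   # 1/(b-c) is now defined and finite
```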
What this means, in practice, is that any dependence of a result on the assumption that b-c is nonzero will render the theory containing it (say T1) incompatible with theories which do not take on this assumption (say T2): the combination of the two effectively gives us an expanded theory which entails both a=c and a!=c. Thus the form of argument which runs 'T1 and T2 are well known, so by T1 we get X, and by T2 it then follows that Y, and thus we get...' is invalid, but it may well not be obvious to those reading such arguments that things do not follow.
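One way to make this incompatibility mechanical (my own construction, not something from the post) is to track each theory's identifications with a union-find structure and check whether a disequality one theory relies on collapses once the other theory's identifications are merged in:

```python
# Union-find over quantity names: assume_equal records an identification,
# find returns the representative of a quantity's equivalence class.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def assume_equal(x, y):
    parent[find(x)] = find(y)

assume_equal('a', 'b')   # T1's simplifying assumption
assume_equal('b', 'c')   # T2's simplifying assumption

# T1's results relied on b != c; after merging both theories' assumptions,
# that disequality no longer holds, so the combination is inconsistent.
contradiction = find('b') == find('c')
print(contradiction)
```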
As such, when looking at mathematical models of things such as physical systems or economic behaviours, we need to be aware of all simplifying assumptions that results about those models rely upon if we are to be able to reliably use results from multiple sources. (These thoughts arose as I tried to read a typical textbook on economics and started to chase through the implications of the assumptions they were making.)
I was wondering whether anybody on this list knew of theoretical results developed around such ideas? Also, I am wondering whether, from a more involved development of such ideas, a concrete case can, and possibly should, be made for mathematicians, scientists, economists and suchlike to be more explicit about the assumptions that their reasoning depends on.