FOM: expr. power of syntax

Ben Abraham ben at cpw.math.columbia.edu
Wed Jul 24 16:46:30 EDT 2002


this is a bit off the cuff ...

yes i also think the assembler, machine language digression is a bit moot.
remember at the programming language level there is an analog of a wff,
namely a syntactically correct program; a compiler is an acceptor
(context-free). of course at the machine level you can feed arbitrary bits
to a computer, but they won't do anything. what we are interested in are
the "interesting" machine level programs, for instance the ones that
compute the propositional wff's (or syntactic high-level programs). the
counter argument seems a bit like saying everything is regular because the
kleene* contains all possible strings. we must consider interesting
subsets of the set of all possible strings over the alphabet (compilers,
assemblers do this). in view of the current debate we would be interested
in a specific subset of possible machine programs that computes the wff's.

getting back to the original question, i still think it's a question of
representation; the limitations of regular languages and the pumping lemma
will do most of the work. (of course i could be wrong!)

dick suggested: we have a regular language R and a valuation v of each r
in R such that v(r) has the correct truth value. what this means is that
if we have a context free language C and a correct valuation v', there
must be a correspondence, corr, such that corr(r) = c and v(r) = v'(c).
a correspondence is possible simply because C is countable. however an
independent semantic equivalence seems impossible because no such
independent corr that preserves semantics seems possible.  here's why:
consider the language of expressions as a simplification. the semantic
ingredients of an expression are an operator and two operands. given an
expression with n operators there are exactly 2n operands that must be
unambiguously associated with these operators. think about how the pumping
lemma fails for prefix expressions: for an n-state fsm we consider a
valid expression beginning with more than n ops, and simply pump the ops
to get the wrong number of operands (same for postfix and parenthesized
infix). i claim the no-parens infix language is a false hope and gets the
n to 2n ratio right only because an operator is adjacent to both its
operands in the chain.  we can't do much better than this (else pumping
lemma) and the
only regular expressions that get the ratio right are inherently ambiguous.
in short, regular expressions are bad at counting, and when they do count
they are ambiguous, and without the right "semantic ingredients" we can't
define an interpretation. so no such corr exists. sorry if i ran on a bit
...
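here's a toy sketch of that pumping argument in python (the checker and
names are mine, nothing standard); counting atoms instead of operand
slots, a valid prefix expression over binary ops has exactly one more
atom than it has ops, and pumping the leading ops wrecks that count:

```python
def is_valid_prefix(tokens):
    """a string of binary ops ('+') and atoms ('a') is a valid prefix
    expression iff, scanning left to right, we start needing one
    expression, each op raises the need by one, each atom lowers it by
    one, and the need hits zero exactly at the end of the string."""
    need = 1
    for t in tokens:
        if need == 0:            # expression already complete: trailing junk
            return False
        need += 1 if t == '+' else -1
    return need == 0

# a valid expression whose leading run of ops is longer than n states:
expr = list('++++aaaaa')         # 4 ops, 5 atoms -> valid
assert is_valid_prefix(expr)

# an n-state fsm reading that run of ops must repeat a state, so some
# block of ops can be pumped; pumping changes the op count but not the
# atom count, ruining the required ops/atoms ratio:
pumped = list('++') + expr       # pump in two extra ops
assert not is_valid_prefix(pumped)
```

so the fsm accepts the pumped string but the pumped string is not a
wff, contradiction.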



On Mon, 22 Jul 2002, Steve Stevenson wrote:

>
>  > [OK I understand the question now: it's about the complexity of the
>  > interpretation function tied to the (necessarily regular) grammar.
>
> I understood Dick Crouch to say it was not regular, it's context free.
>
>  > When Thomas asked about a "recursive function," I interpreted "recursive"
>  > as "computable."  Not the same thing at all here.]
>  >
>  > I have a perhaps naive question related to Crouch's posting: When computer
>  > hardware interprets assembly language, in what sense is it or is it not
>  > interpreting according to regular grammar rules?  Clearly it may use
>  > nonconstant space, so it cannot be modeled by a DFA.
>
> Steve:
>
> I think you're mixing apples and oranges. In a naive ol' propositional
> faithful (my term), space is fixed at compile/assemble time. See the
> example below.
>
> Computers do not interpret assembler language; they interpret the bit
> strings of machine language that the assembler generates. For example,
> 32 bits
> at a time. The program in memory is interpreted by the following
> procedure known as the fetch-execute cycle operating on (as an
> example) a word that looks like this:
>
> 	 bits 0-10   operation code
> 	 bits 11-15  register 1
> 	 bits 16-28  displacement
> 	 bits 29-31  register 2
>
> You have to know about the program counter (PC): the PC holds the
> memory address of the next instruction in memory. Let MEM(x) be the
> memory location indexed by x. Assemblers require that I name my memory
> locations so that they can set up step 4 below.
>
> 1. Get contents of MEM(PC)
> 2. Decode word into the parts
> 3. Is operation code legit? No, abort
> 4. Fetch MEM(displacement+contents(register2)) to special register,
>    say RM
> 5. Do operation represented by operation code (say +) on register1
>    and RM.
>
> Whoever writes the assembler program has to understand how to convert
> the boolean expression into a program (most compilers do). Suppose
> AND, OR, NOT all do the obvious and VAR n "automagically" (as the
> engineers say) figures out where to put 'n' in memory for step 4. Then A \/ B /\
> C would look like (LOAD puts an arg into its register)
>
>   VAR A
>   VAR B
>   VAR C
>   LOAD r1,B
>   AND  r1,C
>   OR   r1,A
>
> The magic is that the machine implements those operations in
> accord with what you think they should do---pretty close, anyway. Let
> 'true' be all bits on, 'false' all off. The hardware AND treats the
> 32 bits as a vector and computes the obvious result vector-wise. So
>
>    AND r1,C
>
> ands the value of C (fetched into the magic register RM) with the
> contents of r1, bit by bit, such that the ith bit is on iff both r1_i
> and RM_i are on. Similarly, OR and NOT.
>
> I think we can say this is regular. The translation that turns the ol'
> propositional faithful (my term) into machine language is clearly not,
> because it must recursively walk the tree that is the syntactic parse.
>
> steve
>
>
>
>  > Can rules allow nonregular computations?  In a rule such as
>  >
>  > WFF -> ~ WFF     [[WFF_0]] := 1 - [[WFF_1]]
>  >
>  > clearly the common type of WFF_0 and WFF_1 semantic values is Boolean.
>  > Could they have another type, perhaps structured?  I'm not as up on this
>  > subject as I should be.  Thanks.
>
> Got an example in mind?
>
> steve
>
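
p.s. steve's evaluation scheme can be sketched in a few lines of python
(the word layout and opcodes are from his example; the dictionary-as-memory
and everything else is my own simplification):

```python
# toy machine in the spirit of the example above: 'true' is all 32 bits
# on, 'false' is all off, and AND/OR act vector-wise on register r1.

TRUE, FALSE = 0xFFFFFFFF, 0x00000000

MEM = {'A': TRUE, 'B': FALSE, 'C': TRUE}     # VAR A, VAR B, VAR C
r1 = 0

# the assembled program for A \/ B /\ C from the post above
program = [('LOAD', 'B'), ('AND', 'C'), ('OR', 'A')]

for op, name in program:        # a stripped-down fetch-execute cycle
    RM = MEM[name]              # step 4: fetch the operand into RM
    if op == 'LOAD':
        r1 = RM
    elif op == 'AND':           # ith bit on iff r1_i and RM_i both on
        r1 = r1 & RM
    elif op == 'OR':
        r1 = r1 | RM
    else:                       # step 3: opcode not legit, abort
        raise ValueError('illegal opcode')

print(r1 == TRUE)               # -> True, since A is true
```

note the loop itself uses constant state per instruction, which is the
sense in which the execution is "regular"; the recursion all happened
earlier, when the expression was compiled into the flat program.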