[FOM] Motivating the concept of a generic filter

Timothy Y. Chow tchow at alum.mit.edu
Tue Oct 9 13:51:19 EDT 2007


I'm currently trying to improve my informal "forcing for dummies" article, 
with a view to publishing it as an expository article.  One major change 
that I'm likely to make is to discuss Boolean-valued models, since they 
seem quite intuitive to me.

There remains one sticking point, which is that I can't seem to find a way 
to motivate the concept of a *generic* filter in a satisfactory manner.  
By "a satisfactory manner" I mean, roughly speaking, that someone not in 
possession of the concept could see why one would be led to define it.

In some texts, the approach is to discuss generic filters and Martin's 
axiom in the context of infinitary combinatorics, long *before* any 
mention of forcing.  Then by the time you reach forcing, you're supposed 
to be comfortable with generic filters already.  I don't find this to work 
very well as "motivation."  Even if you've seen a generic filter before, 
why would you define p ||- phi in terms of generic filters?  This 
definition seems to be pulled out of a hat.

In Cohen's book, he takes the approach of starting with a minimal model 
and seeing what it would take to add the "missing" subset you want, while 
otherwise keeping things as similar to the minimal model as possible.  
Thus one restricts to transitive epsilon-models, and one wants to keep the 
same ordinals.  He shows that a naive attempt to adjoin a missing subset 
fails, so that one is led to consider more carefully "all conditions at 
once" in some sense, to make sure one chooses conditions that mesh together 
properly.  So far so good.  At this point, however, he says that the chief 
point is to consider generic elements, with no "special" properties that 
are the source of trouble in the naive attempt, and see what is "forced" 
to hold.  This still seems like a leap to me.  What would make you think 
that the seemingly hopelessly vague concept of a "generic element" makes 
any sense and would solve your problems?

The Boolean-valued model approach works nicely up to a point.  We don't 
know which statements will hold and which ones won't hold in our new 
model, so we take them all at once and track their interdependencies using 
a complete Boolean algebra B.  If M is a model of ZFC, it's maybe not 
immediately apparent why we need to choose B to be complete in M, rather 
than complete in V, but as soon as one tries to prove that M^(B) is a 
Boolean-valued model of ZFC one quickly sees the need for the sups and 
infs of subsets of B to be in M.  Modding out by an ultrafilter to get an 
actual model rather than a Boolean-valued model is also pretty natural.
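To spell out where those sups and infs come in: in the usual definition of Boolean truth values, the quantifier clauses read (ranging over B-names a in M^(B), with the attained values forming a subset of B inside M by replacement):

```latex
% The quantifier clauses in the definition of Boolean values.
% Each takes a sup (resp. inf) in B of a set of values that lies
% in M, so B must be complete in M's sense for these to exist.
\[
  [\![\, \exists x\, \varphi(x) \,]\!]
  \;=\;
  \sup_{a \in M^{(B)}} [\![\, \varphi(a) \,]\!],
  \qquad
  [\![\, \forall x\, \varphi(x) \,]\!]
  \;=\;
  \inf_{a \in M^{(B)}} [\![\, \varphi(a) \,]\!].
\]
```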

But again, why generic filters?  Generic filters aren't needed to get new 
models of ZFC; if M is a transitive epsilon-model and B is any Boolean 
algebra in M that M thinks is complete, and U is any ultrafilter of B, 
then M^(B)/U is a model of ZFC.  Generic filters are needed if you want 
M^(B)/U to have some "nice" properties, but why would you think that you 
need those nice properties?  Indeed, in Bell's book, the independence of 
the continuum hypothesis is proved before generic filters are discussed.  
He takes the relevant poset, embeds it in a complete Boolean algebra in a 
natural way, and shows that this works.  It seems a bit like magic; and 
even if you don't blink at this proof, it's not at all clear why you would 
think that you could then dispense with the Boolean algebra and work with 
*generic* filters in an arbitrary poset.
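For reference, the genericity condition in question, in its poset formulation, is the usual one:

```latex
% Genericity over M: the filter meets every dense subset of P
% that happens to lie in the ground model M.
\[
  G \subseteq P \text{ is generic over } M
  \quad\Longleftrightarrow\quad
  G \text{ is a filter and } G \cap D \neq \emptyset
  \text{ for every dense } D \subseteq P \text{ with } D \in M.
\]
```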

So, any suggestions for getting past this sticking point?

Tim
