## Stevo’s Forcing Class Fall 2012 – Class 14

(This is the fourteenth lecture in Stevo Todorcevic’s Forcing class, held in the fall of 2012. You can find the thirteenth lecture here. Quotes by Stevo are in dark blue; some are deep, some are funny, some are paraphrased, so use your judgement. As always, I appreciate any type of feedback, including reporting typos, in the comments below.)

## Stevo’s Forcing Class Fall 2012 – Class 13

(This is the thirteenth lecture in Stevo Todorcevic’s Forcing class, held in the fall of 2012. You can find the twelfth lecture here. Quotes by Stevo are in dark blue; some are deep, some are funny, some are paraphrased, so use your judgement. As always, I appreciate any type of feedback, including reporting typos, in the comments below.)

## Hindman’s Theorem write-up

It came to my attention that Leo Goldmakher had written up notes for a lecture I gave in August 2011 on the proof of Hindman’s Theorem via ultrafilters. The notes are quite nice so I thought I would share them.

Here is a link to the notes (pdf) and here is Leo’s website.

The lecture I gave follows the papers:

• “An Algebraic Proof of van der Waerden’s Theorem” by Vitaly Bergelson, Hillel Furstenberg, Neil Hindman and Yitzhak Katznelson. (L’Enseignement Mathématique, t. 35, 1989, p. 209–215)
• “Ultrafilters: Some Old and Some New Results” (pdf) by W.W. Comfort. (Bulletin of the AMS, Volume 83, Number 4, July 1977)

## Facts about the Urysohn Space – Some useful, some cool

(This is almost verbatim the talk I gave recently (Feb 23, 2012) at the Toronto Student Set Theory and Topology Seminar. I will be giving this talk again on April 5, 2012)

I have been working on a problem involving the Urysohn space recently, and I figured that I should fill people in on the basic facts and techniques used when working with this space. I will give some useful facts, a key technique and 3 cool facts. First, the definition!

Definition: A metric space $U$ has the Urysohn property if

• $U$ is complete and separable
• $U$ contains every separable metric space as an isometric copy.
• $U$ is ultrahomogeneous, in the sense that every isometry between finite subspaces of $U$ extends to an isometry of $U$ onto itself.

You might already know a space that satisfies the first two properties: $C[0,1]$, the space of continuous real-valued functions on $[0,1]$ with the supremum metric, works by the Banach–Mazur theorem. (The Hilbert cube $[0,1]^\omega$ contains a copy of every separable metric space as well, but only up to homeomorphism, not isometry.) However, $C[0,1]$ is not ultrahomogeneous. Should a Urysohn space even exist? It does, but the construction isn’t particularly illuminating so I will skip it.

## Reading the Dictionary

I have a confession to make: I am a bibliophile. Reading, owning, perusing, lending, alphabetizing and buying books are all things that make me happy. High on my list are hardcover graphic novels and quality dictionaries. One of the skills you learn quickly while reading a dictionary (so I hear) is how to look up words. Of course the words in a dictionary are laid out in a very orderly fashion: first the ‘A’s, then the ‘B’s, etc. This order turns out to be a useful example of an interesting linear order.

Example: Consider $\{a,b,c\}\times\{a,b\}$ with the dictionary ordering. We get $aa < ab < ba < bb < ca < cb$.

In general to get a dictionary ordering on $A\times B$ out of two linear orders $A,B$ we do the following:

1. Compare the first coordinates. If they are different, use the ordering on $A$.
2. If the first coordinates are the same, compare the second coordinates. If the second coordinates are different, use the ordering on $B$. If the second coordinates are also the same, the two elements are equal (as they have the same first and second coordinates).

You can extend this process if you want and compare third, fourth or fifth coordinates if you start with three, four or five linear orders. Of course this is just saying something you already know; I don’t need to tell you how to figure out whether ‘oscillate’ comes before ‘ossifrage’.
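The two-step rule above can be sketched in Python. (Python tuples already compare lexicographically, but spelling the steps out makes the procedure explicit; `dict_compare` is just an illustrative name, not anything standard.)

```python
def dict_compare(x, y):
    """Compare two pairs lexicographically.

    Returns -1 if x comes before y, 0 if they are equal, and 1 if x
    comes after y, assuming the coordinates themselves support <.
    """
    # Step 1: compare first coordinates; if they differ, they decide.
    if x[0] != y[0]:
        return -1 if x[0] < y[0] else 1
    # Step 2: first coordinates agree, so the second coordinates decide.
    if x[1] != y[1]:
        return -1 if x[1] < y[1] else 1
    # Same first and second coordinates: the pairs are equal.
    return 0

# The ordering on {a,b,c} x {a,b} from the example:
pairs = [(a, b) for a in "abc" for b in "ab"]
print(sorted(pairs))
# aa < ab < ba < bb < ca < cb
```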

Example: Now my fellow sesquipedalians might be interested in the following linear order: Let $D = \{*, a,b,c, \ldots, z\}$ where $* < a < b < \ldots < z$ and $*$ stands for a blank space. Now consider $D^{189819}$ with the dictionary ordering. This will contain every English word both technical and non-technical. Granted it will also contain silly non-words like: “this*word*asserts*that*it*is*a*silly*word”.
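To see the blank symbol in action, here is a small sketch of the $D^n$ idea (with a tiny illustrative length standing in for the huge exponent; the helper name is mine):

```python
# Pad every word to a fixed length with the blank symbol '*'. In ASCII,
# '*' happens to sort before 'a'-'z', so ordinary string comparison on
# the padded words is exactly the dictionary ordering on D^n described
# above. N = 16 is just a stand-in for the huge exponent in the post.
N = 16

def pad(word):
    return word.ljust(N, "*")

words = ["ossifrage", "oscillate", "ox", "oscillates"]
print(sorted(words, key=pad))
# 'oscillate' precedes 'oscillates' because a blank sorts before 's'.
```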

## Creeping Along

In my ongoing love affair with compactness I am constantly revisiting a particular proof of the Heine-Borel theorem, a characterization of compactness in $\mathbb{R}$. There are two proofs that I know of: the standard “subdivision” proof and the “creeping along” proof. I am going to focus on the creeping along proof.

Heine-Borel Theorem. A subset $A \subseteq \mathbb{R}$ is compact if and only if it is closed and bounded.

To do some creeping we need to collect some useful facts.

Fact 1. A subset $A \subseteq \mathbb{R}$ is bounded if and only if it is contained in some closed interval $[a,b]$.

Fact 2. The set $\mathbb{R}$ is complete (as a linear order): every non-empty set $A \subseteq \mathbb{R}$ with an upper bound has a least upper bound, called $\sup A$.

Fact 3. Closed subsets of compact subsets of $\mathbb{R}$ are in fact themselves compact. With Fact 1 this means that it is enough to show that closed, bounded intervals in $\mathbb{R}$ are compact. (In general, closed subsets of compact spaces are compact.)

So now let us creep:
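In outline, the creeping argument combines the three facts above as follows (a sketch only, with the details left to check):

```latex
% Sketch of the creeping along argument for a closed interval [a,b].
Let $\mathcal{U}$ be an open cover of $[a,b]$ and let
\[
  S = \{\, x \in [a,b] : [a,x] \text{ is covered by finitely many members of } \mathcal{U} \,\}.
\]
Then $S$ is non-empty (as $a \in S$) and bounded above by $b$, so Fact~2
gives $s = \sup S$. Any $U \in \mathcal{U}$ containing $s$ contains an
open interval around $s$, hence contains some point of $S$; adding $U$
to a finite subcover witnessing that point lets us creep strictly past
$s$ unless $s = b$, and also shows $b \in S$. Hence $[a,b]$ is compact,
and Facts~1 and~3 extend this to every closed and bounded subset.
```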

## Helly’s Theorem (2/2)

Last week we looked at the concepts of a collection of sets being n-linked or having the finite intersection property. The key theorem was Helly’s theorem which says:

Helly’s Theorem: If a countable family of closed convex sets in the plane, at least one of which is bounded, is 3-linked, then the family has the FIP, and hence has a point in common.

Now I will look at some of the generalizations that Alexander Soifer, author of “The Mathematical Coloring Book”, makes in Chapter 28 of that book. More than pure generalizations, they combine Ramsey theory with Helly’s Theorem.

## My first entry! “Helly’s Theorem”

I love compactness. I really do. It turns infinite things into (almost) finite things. I could gush about how great it is, but instead let me tell you about one problem where compact sets act as the delimiter.

Here is one way to characterize compactness:

A space X is compact if and only if any family of closed sets with the Finite Intersection Property (FIP) has a common point.

[Remember that a collection has the FIP if every finite subcollection has a common point (i.e. has non-empty intersection).]

This has a pretty clear connection to filters, as filters are collections of sets with the FIP (and the intersection is in the filter!) and closed under supersets. One example of a filter is the collection of all subsets of the real line that contain a closed interval around 0.
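To make the FIP concrete for finite families, here is a toy check in Python (the helper name `has_fip` is mine, just for illustration; of course the interesting cases are infinite families):

```python
from itertools import combinations

def has_fip(sets):
    """Check the Finite Intersection Property for a finite family of
    finite sets: every finite subfamily must have a common point.
    (For a finite family it would suffice to intersect everything at
    once, but checking all subfamilies mirrors the definition.)"""
    family = [frozenset(s) for s in sets]
    for k in range(1, len(family) + 1):
        for sub in combinations(family, k):
            if not frozenset.intersection(*sub):
                return False
    return True

# Sets all containing 0 -- a finite piece of a filter base:
print(has_fip([{0, 1}, {0, 1, 2}, {0, 3}]))  # True
# Pairwise meeting, but no point common to all three:
print(has_fip([{0, 1}, {1, 2}, {0, 2}]))     # False
```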

A closely related notion is that of being 2-linked. A collection A is 2-linked if any two sets in A have non-empty intersection. For example the collection of real intervals $\{(-n,n): n \geq 1\}$ is 2-linked (they are nested). Another example is the set of sides of a triangle, which meet pairwise at the vertices. (Why not a square?)

Then of course we can talk about being 3-linked, which means that any 3 sets have non-empty intersection (we will say that such sets ‘meet’). Obviously, $\{(n,n+2): n \in \mathbb{Z}\}$ is 2-linked, but not 3-linked. (edit: Yeah, so not only is this not obvious, but it is not true! I address this here.)

Then we could go on to define n-linked for an arbitrary natural number n.
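For finite families of open intervals these definitions are easy to test by machine. Here is a sketch (helper names are mine), which incidentally shows why the interval family above is trickier than it looks:

```python
from itertools import combinations

def open_intervals_meet(intervals):
    """Do these open intervals (a, b) have a common point?
    They meet iff the largest left endpoint is strictly below
    the smallest right endpoint."""
    left = max(a for a, b in intervals)
    right = min(b for a, b in intervals)
    return left < right

def is_n_linked(intervals, n):
    """Every n of the intervals have non-empty intersection."""
    return all(open_intervals_meet(sub)
               for sub in combinations(intervals, n))

# A finite piece of the family {(n, n+2) : n in Z}:
family = [(n, n + 2) for n in range(5)]
print(is_n_linked(family, 2))  # False: (0, 2) and (2, 4) are disjoint
```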

Question 1: How is the FIP related to being n-linked?
Question 2: Can you find, for each n, an example of a collection that is n-linked but not (n+1)-linked?
Question 3: How is n-linkedness related to the dimension of the real line?

I’ll get to these later, but you should think about them. Questions 1 and 2 are not hard; Question 3 takes some thought, but just try to come up with a conjecture.