I have a confession to make: I am a bibliophile. Reading, owning, perusing, lending, alphabetizing and buying books are all things that make me happy. High on my list are hardcover graphic novels and quality dictionaries. One of the skills you learn quickly while reading a dictionary (so I hear) is how to look up words. Of course the words in a dictionary are laid out in a very orderly fashion: first the ‘A’s, then the ‘B’s, etc. This order turns out to be a useful example of an interesting linear order.
Example: Consider two linear orders $A$ and $B$, and give the product $A \times B$ the dictionary ordering.
In general, to get a dictionary ordering on $A \times B$ out of two linear orders $A$ and $B$, we do the following:
- Compare the first coordinates. If they are different, use the ordering on $A$.
- If the first coordinates are the same, compare the second coordinates. If the second coordinates are different, use the ordering on $B$. If the second coordinates are the same, the elements you are comparing are equal (as they have the same first and second coordinates).
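The two comparison steps above can be sketched in a few lines of Python. (Python's built-in tuple comparison works exactly this way, so the function is spelled out only for illustration.)

```python
# A minimal sketch of the dictionary (lexicographic) ordering on pairs.

def dict_less(p, q):
    """Return True if pair p comes strictly before pair q in the dictionary order."""
    (a1, b1), (a2, b2) = p, q
    if a1 != a2:          # first coordinates differ: the order on A decides
        return a1 < a2
    return b1 < b2        # first coordinates agree: the order on B decides

print(dict_less((1, 9), (2, 0)))   # True: decided by the first coordinate alone
print(dict_less((1, 3), (1, 7)))   # True: tie broken by the second coordinate
print(dict_less((1, 3), (1, 3)))   # False: the pairs are equal
```

Note that `(1, 9) < (2, 0)` in plain Python gives the same answer, which is one reason lexicographic order feels so natural to programmers.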
You can extend this process if you want and compare third, fourth or fifth coordinates if you start with three, four or five linear orders. Of course this is just saying something you already know; I don’t need to tell you how to figure out whether ‘oscillate’ comes before ‘ossifrage’.
Example: Now my fellow sesquipedalians might be interested in the following linear order: Let $A = \{a, b, c, \dots, z, *\}$, where $*$ stands for a blank space. Now consider the set of sequences from $A$ with the dictionary ordering. This will contain every English word, both technical and non-technical. Granted, it will also contain silly non-words like: “this*word*asserts*that*it*is*a*silly*word”.
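As a toy illustration, we can pad words to a common length with the blank symbol (an assumption about how to compare words of different lengths; conveniently, in ASCII the character `*` happens to sort before every lowercase letter) and then let an ordinary string sort reproduce the dictionary order:

```python
# Sketch: words as equal-length sequences over the alphabet {a, ..., z, *},
# with * as the blank. Padding with '*' makes shorter words come first,
# just as in a real dictionary.

def pad(word, length):
    return word + '*' * (length - len(word))

words = ["ossifrage", "oscillate", "cats", "cat"]
n = max(len(w) for w in words)
print(sorted(words, key=lambda w: pad(w, n)))
# ['cat', 'cats', 'oscillate', 'ossifrage']
```

So ‘cat’ comes before ‘cats’ (blank beats ‘s’), and ‘oscillate’ beats ‘ossifrage’ at the third letter.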
Continue reading Reading the Dictionary
In my ongoing love affair with compactness I am constantly revisiting a particular proof of the Heine-Borel theorem, a characterization of compactness in $\mathbb{R}$. There are two proofs that I know of: the standard “subdivision” proof and the “creeping along” proof. I am going to focus on the creeping along proof.
Heine-Borel Theorem. A subset of $\mathbb{R}$ is compact if and only if it is closed and bounded.
To do some creeping we need to collect some useful facts.
Fact 1. A subset of $\mathbb{R}$ is bounded if and only if it is contained in some closed interval $[a, b]$.
Fact 2. The set $\mathbb{R}$ is complete (as a linear order): every non-empty set $A \subseteq \mathbb{R}$ with an upper bound has a least upper bound, called $\sup A$.
Fact 3. Closed subsets of compact subsets of $\mathbb{R}$ are in fact themselves compact. With Fact 1 this means that it is enough to show that closed and bounded intervals in $\mathbb{R}$ are compact. (In general, closed subsets of compact spaces are compact.)
So now let us creep:
Continue reading Creeping Along
Last week we looked at the concepts of a collection of sets being n-linked or having the finite intersection property. The key theorem was Helly’s theorem, which says:
Helly’s Theorem: If a (countable) family of closed convex sets in the plane (at least one of which is bounded) is 3-linked, then the sets have a point in common; indeed, being 3-linked already gives the family the FIP.
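The one-dimensional analogue of Helly's theorem is easy to check concretely: for closed intervals on the line, being 2-linked already forces a common point, namely any point between the largest left endpoint and the smallest right endpoint. A small sketch:

```python
# One-dimensional Helly: closed intervals that pairwise intersect
# (are 2-linked) have a point in common.

from itertools import combinations

intervals = [(0, 5), (3, 9), (4, 7)]   # (left, right) endpoint pairs

# Check the 2-linked condition: every pair of intervals overlaps.
assert all(max(a[0], b[0]) <= min(a[1], b[1])
           for a, b in combinations(intervals, 2))

lo = max(left for left, right in intervals)
hi = min(right for left, right in intervals)
print(lo, hi)   # 4 5: every point of [4, 5] lies in all three intervals
```

The general fact behind this: if `lo > hi`, the interval achieving `lo` and the one achieving `hi` would already fail to intersect, contradicting 2-linkedness.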
Now I will look at some of the generalizations that Alexander Soifer, author of “The Mathematical Coloring Book”, makes in Chapter 28 of that book. More than pure generalizations, they are the combination of Ramsey theory and Helly’s Theorem.
Continue reading Helly’s Theorem (2/2)
I love compactness. I really do. It turns infinite things into (almost) finite things. I could gush about how great it is, but instead let me tell you about one problem where compact sets act as the delimiter.
Here is one way to characterize compactness:
A space X is compact if and only if any family of closed sets with the Finite Intersection Property (FIP) has a common point.
[Remember that a collection has the FIP if every finite subcollection has a common point (i.e. has non-empty intersection).]
This has a pretty clear connection to filters, as filters are collections of sets with the FIP (and the intersection is in the filter!) and closed under supersets. One example of a filter is the collection of all subsets of the real line that contain a closed interval around 0.
A closely related notion is that of being 2-linked. A collection A is 2-linked if any two sets in A have non-empty intersection. For example, a nested collection of real intervals is 2-linked. Another example is the set of sides of a triangle. (Why not a square?)
Then of course we can talk about being 3-linked, which means that any 3 sets have non-empty intersection (we will now say that such sets ‘meet’). Obviously, is 2-linked, but not 3-linked. (edit: Yeah, so not only is this not obvious, but it is not true! I address this here.)
Then we could go on to define n-linked for an arbitrary natural number n.
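For finite families of finite sets, k-linkedness can be tested directly by intersecting every group of k sets. A minimal sketch, using the sides of a triangle (viewed as pairs of vertices) from the example above:

```python
# Sketch: checking k-linkedness of a finite family of finite sets.
# The sides of a triangle, as pairs of vertices, are 2-linked
# (any two sides share a vertex) but not 3-linked.

from itertools import combinations

def is_k_linked(family, k):
    """True if every k sets in the family have non-empty intersection."""
    return all(frozenset.intersection(*group)
               for group in combinations(family, k))

sides = [frozenset("AB"), frozenset("BC"), frozenset("AC")]
print(is_k_linked(sides, 2))   # True
print(is_k_linked(sides, 3))   # False: no vertex lies on all three sides
```

Playing with families like this is one concrete way to approach Questions 1 and 2.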
Question 1: How is the FIP related to being n-linked?
Question 2: Can you find, for each n, an example of a collection that is n-linked but not (n+1)-linked?
Question 3: How is n-linkedness related to the dimension of the real line?
I’ll get to these later. But you should think about them. 1 and 2 are not hard. 3 takes some thought, but just try to come up with a conjecture.
Continue reading My first entry! “Helly’s Theorem”