Bootcamp 6 – Ramsey DocCourse Prague 2016

The following notes are from the Ramsey DocCourse in Prague 2016. The notes are taken by me and I have edited them. In the process I may have introduced some errors; email me or comment below and I will happily fix them.

Title: Bootcamp (6 of 6)

Lecturer: Jaroslav Nesetril

Date: Monday October 3, 2016.

Main Topics: Other applications of the “product argument”, Chain-Ramsey for Posets, Proof of edge-Ramsey for Graphs, Proof of Hales-Jewett

Definitions: Structural pigeonhole principle, Poset, Graph product, Combinatorial line

Bootcamp 1 – Bootcamp 2 – Bootcamp 3 – Bootcamp 4 – Bootcamp 5 – Bootcamp 6


From last time

This lecture continues from Bootcamp 4 which introduced the “product argument” as seen in the proof of Hilbert’s Theorem. In this lecture we have in mind three direct applications of the product argument. For this lecture it is very important that you understand the proof of Hilbert’s theorem!

  1. Chain-Ramsey for partial orders.
  2. Edge-Ramsey for graphs.
  3. Hales-Jewett Theorem.

The first two arguments will go as follows:

Overview.

  1. Use an embedding theorem which says that every structure is contained in a product structure.
  2. Hence, without loss of generality, it suffices to construct the Ramsey witness as a product structure.
  3. The special case of only one factor is usually a classical theorem.
  4. At the inductive stage, use the product argument to create the newest factor.
  5. Depending on the structure, care must be taken to ensure that the “cross edges” between the old structure and the new factor remain monochromatic.

The Product Pigeonhole Principle

Definition. We say that a class \mathcal{C} satisfies the pigeonhole principle (equivalently, has the vertex-Ramsey property) if for every B \in \mathcal{C} and k \in \mathbb{N} there is a C \in \mathcal{C} such that C \longrightarrow (B)_k^1 .

Let us first state a general version of the Pigeonhole Principle which we will find useful. The proof is the product argument.

Theorem (Product Pigeonhole). Let n_1, \ldots, n_d, k \in \mathbb{N} . There are finite sets C_i such that for all partitions C_1 \times \ldots \times C_d = \mathcal{A}_1 \cup \ldots \cup \mathcal{A}_k , there are B_i \subset C_i of cardinality n_i , and an i_0 such that B_1 \times \ldots \times B_d \subseteq \mathcal{A}_{i_0} .

Finite sets hide some of what is going on, so let us state this for classes of finite structures. The first theorem will be a corollary of the second.

Theorem (Product-Pigeonhole for a class of finite structures). Let \mathcal{C} be a class of finite structures with the vertex-Ramsey property (i.e. the pigeonhole principle). Let B_1, \ldots, B_d \in \mathcal{C}, k \in \mathbb{N} . There are C_i \in \mathcal{C} such that for all partitions C_1 \times \ldots \times C_d = \mathcal{A}_1 \cup \ldots \cup \mathcal{A}_k , there are isomorphic copies B^\prime_i \cong B_i in C_i , and an i_0 such that B^\prime_1 \times \ldots \times B^\prime_d \subseteq \mathcal{A}_{i_0} .

We denote the conclusion by \prod C_i \longrightarrow (\prod B_i)_k^1 .

Proof. This is by induction on d , the number of factors. The case d=1 is exactly the pigeonhole principle for \mathcal{C} . Suppose the theorem is true for d factors (all substructures, all colourings) and we will show it for d+1 factors.

Fix structures B_1, \ldots, B_d \in \mathcal{C} , the “extra” structure B_{d+1} and k \in \mathbb{N} .

By the pigeonhole principle for \mathcal{C} , there is a C_{d+1} such that C_{d+1} \longrightarrow (B_{d+1})_k^1 . (We think of this C_{d+1} as a “small” factor.)

Let k_1 = k^{\vert C_{d+1} \vert} .

By the inductive hypothesis, there are C_i \in \mathcal{C} such that \prod_{i \leq d} C_i \longrightarrow (\prod_{i\leq d} B_i)_{k_1}^1 . (We think of \prod_{i \leq d} C_i as “big”.)

Let \chi : \prod_{i \leq d+1} C_i \rightarrow k be an arbitrary colouring. We define two related colourings, first on the big part \prod_{i \leq d} C_i and then on the small part C_{d+1} .

Consider the colouring \chi_{big} : \prod_{i \leq d} C_i \rightarrow k_1 . For M \in \prod_{i \leq d} C_i , set \chi_{big}(M) = \langle\chi(M,x) : x \in C_{d+1} \rangle .

Find B^\prime_i \cong B_i in C_i (for 1 \leq i \leq d ) such that B^\prime_1 \times \ldots \times B^\prime_d is \chi_{big} -monochromatic.

We now define the second colouring. Fix (b_1, \ldots, b_d) \in B^\prime_1 \times \ldots \times B^\prime_d (any point will work, because of the previous colouring). Define \chi_{small} : C_{d+1} \rightarrow k by \chi_{small}(x) = \chi(b_1, \ldots, b_d, x) .

There is a B_{d+1}^\prime \cong B_{d+1} in C_{d+1} that is \chi_{small} -monochromatic.

Exercise. Convince yourself that B^\prime_1 \times \ldots \times B^\prime_d \times B^\prime_{d+1} is \chi -monochromatic.
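To see the two induced colourings in a concrete (if toy) setting, here is a small Python sketch for plain finite sets; the names chi_big and chi_small and the particular colouring are ours, chosen only for illustration.

```python
from itertools import product

# A toy instance of the product argument for plain finite sets.
# chi is an arbitrary k-colouring of C1 x C2 x C_extra.
C1, C2, C_extra = range(3), range(3), range(4)
k = 2
chi = lambda *point: sum(point) % k   # any colouring would do

# chi_big colours the "big" part C1 x C2 by the full vector of colours
# that a point M gets when paired with every x in the "small" factor.
def chi_big(M):
    return tuple(chi(*M, x) for x in C_extra)

# Once a chi_big-monochromatic product B1' x B2' is found, any fixed point
# (b1, b2) in it induces the second colouring on the small factor alone.
def chi_small(b, x):
    return chi(*b, x)

# The number of possible chi_big values is at most k ** |C_extra|, which is
# the k_1 used when the inductive hypothesis is applied.
print(len({chi_big(M) for M in product(C1, C2)}), "<=", k ** len(C_extra))
```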

Chain-Ramsey for Partial Orders

Definition. We refer to a set with a partial order as a poset. A subset C of a poset \mathbb{P} is a chain if the inherited order on C is a linear order.

Our proof of Hilbert’s theorem actually proves a Ramsey result about posets.

Theorem (Chain-Ramsey for Posets). For every k \in \mathbb{N} , every finite chain A and every finite poset B , there is a poset C such that C \longrightarrow (B)_k^A .

We will use the following embedding theorem, which will allow us to assume without loss of generality that B is a product of linear orders.

Theorem (Dushnik-Miller). For every poset P , there are linear orders L_1, \ldots, L_t such that P embeds into \prod_{i=1}^t L_i . The minimal such t is called the Dushnik-Miller dimension of P .
Definition. The product \mathbb{L} \times \mathbb{M} of two linear orders \mathbb{L} = (L, \leq_L), \mathbb{M} = (M, \leq_M) is the set L\times M together with the coordinate-wise order (a_1,b_1) \leq (a_2,b_2) iff a_1 \leq_L a_2 and b_1 \leq_M b_2 .
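As a quick sanity check on the coordinatewise order, here is a tiny Python sketch (with illustrative names of our own choosing) showing that the product of two 3-element chains is a genuine poset and not a chain.

```python
from itertools import product

# Product of two linear orders L = M = {0, 1, 2} with the usual order.
L = M = range(3)

def leq(p, q):
    """Coordinatewise order on L x M."""
    return p[0] <= q[0] and p[1] <= q[1]

points = list(product(L, M))

# (0, 2) and (2, 0) are incomparable, so the product is a genuine poset,
# not a linear order.
print(leq((0, 2), (2, 0)), leq((2, 0), (0, 2)))   # False False

# A chain in this poset: any set of pairwise comparable points.
chain = [(0, 0), (1, 1), (2, 2)]
print(all(leq(a, b) or leq(b, a) for a in chain for b in chain))  # True
```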

Proof of Chain-Ramsey for Posets. As an exercise, use the proof of the Product-Pigeonhole for a class of finite structures to prove the desired result.

There is only one step that will be specific to posets. If you can’t find it, then look to our later proof of the graph type lemma.

Edge-Ramsey for Graphs

Theorem (Edge-Ramsey for Graphs). For every graph B and every k \in \mathbb{N} there is a graph C such that C \longrightarrow (B)_k^\text{edge} .

Embedding theorem and graph products

Our embedding theorem will be the following:

Theorem. For every graph G , there is a finite index set I and n(i) \in \mathbb{N} , such that G embeds into \prod_{i \in I} K_{n(i)} .

Definition. The product of two graphs G_1 = (V_1, E_1), G_2 = (V_2, E_2) is the graph G_1 \times G_2 with vertex set V_1 \times V_2 and edges \{(u,u^\prime), (v,v^\prime)\} where \{u,v\} \in E_1 and \{u^\prime,v^\prime\} \in E_2 .

Define \text{dim}(G) to be the minimum \vert I \vert over all representations given by the above theorem.

We give an example of a graph product below. Regarding the dimension of a graph, here are two nice facts (which are unrelated to our theorem at hand).

Fact. \text{dim}(K_n^d) = d .

Of course the upper bound is obvious, but the lower bound is interesting!

Fact (Lovász-Nešetřil-Pultr 1980). The dimension of 2^{d-1} disjoint edges is d .

This is the first use of the algebraic method in combinatorics.

Proof of embedding theorem.

Fix G=(V,E) with \vert V \vert = n . Let \Phi be the collection of all graph homomorphisms f: G \longrightarrow K_n .

Define the map \phi: V \longrightarrow K_n^{\vert \Phi \vert} , where the f -coordinate of \phi(x) is f(x) , for f \in \Phi .

  1. Claim. \phi preserves edges.
  2. Claim. \phi preserves non-edges.
  3. Claim. \phi is injective.

Conclude that \phi embeds G into K_n^\Phi .
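For a small graph this construction can be carried out by brute force. The following Python sketch (our own illustrative code, not part of the lecture) enumerates all homomorphisms into K_n , i.e. all proper n -colourings, and checks the three claims for a path on four vertices.

```python
from itertools import product as cartesian

# Brute-force version of the embedding theorem for a small graph G.
# Vertices are 0..n-1; K_n is the complete graph on the same n labels.
V = [0, 1, 2, 3]
E = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}   # a path on 4 vertices
n = len(V)

def is_hom_to_Kn(f):
    # A map V -> K_n is a graph homomorphism iff adjacent vertices get
    # different labels (K_n has all edges between distinct labels).
    return all(f[u] != f[v] for u, v in (tuple(e) for e in E))

# Phi = all homomorphisms G -> K_n (equivalently, proper n-colourings).
Phi = [f for f in cartesian(range(n), repeat=n) if is_hom_to_Kn(f)]

# phi sends a vertex to the vector of its images under every f in Phi.
phi = {v: tuple(f[v] for f in Phi) for v in V}

# Claim 1: edges go to edges of the product (all coordinates differ).
print(all(all(a != b for a, b in zip(phi[u], phi[v]))
          for u, v in (tuple(e) for e in E)))
# Claim 2: non-edges go to non-edges (some coordinate agrees).
non_edges = [(u, v) for u in V for v in V
             if u < v and frozenset((u, v)) not in E]
print(all(any(a == b for a, b in zip(phi[u], phi[v])) for u, v in non_edges))
# Claim 3: phi is injective.
print(len(set(phi.values())) == len(V))
```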

Taking care of cross-edges

The proof will be comparable to the chain-Ramsey for posets, but will introduce an additional complication. In the poset case, given \overline{a}, \overline{b} \in B^\prime_1 \times \ldots \times B^\prime_N and a,b \in B^\prime_{N+1} there is one way to get a pair of points in B^\prime_1 \times \ldots \times B^\prime_N \times B^\prime_{N+1} that respect the order \overline{a} \leq \overline{b} and a \leq b . Namely (\overline{a}, a) \leq (\overline{b}, b) . This was possible because the poset relation is antisymmetric.

In the graph case, the edge relation is symmetric, so there won’t be a canonical way to get elements of the large product. So we have to work harder and take this into account when arguing.

A typical example is the product of two edges. Let G_1 = (\{a,b\}, E_1) with the edge \{a,b\} . Let G_2 = (\{c,d\}, E_2) with the edge \{c,d\} . In this case the product G_1 \times G_2 is the graph on the four points (a,c),(a,d),(b,c),(b,d) with the two edges \{(a,c),(b,d)\} and \{(a,d),(b,c)\} .[Picture]
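Here is a short Python sketch (illustrative code of our own) that computes this categorical product and confirms that the product of two edges is a perfect matching on four vertices.

```python
from itertools import product

def graph_product(V1, E1, V2, E2):
    """Categorical (tensor) product: {(u,u'),(v,v')} is an edge iff
    {u,v} is an edge of G1 and {u',v'} is an edge of G2."""
    V = list(product(V1, V2))
    E = set()
    for u, v in E1:
        for up, vp in E2:
            # both pairings arise, since the edges are unordered
            E.add(frozenset({(u, up), (v, vp)}))
            E.add(frozenset({(u, vp), (v, up)}))
    return V, E

# Two single edges, as in the example above.
V, E = graph_product(['a', 'b'], [('a', 'b')], ['c', 'd'], [('c', 'd')])
print(sorted(V))
print(E)   # exactly the two disjoint edges {(a,c),(b,d)} and {(a,d),(b,c)}
```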

The issue is that every pair of edges gives rise to two edges in the product. We will use \{1,-1\} -types to figure out which one to take.

Definition. For 1 \leq i \leq d let \leq_i be a linear order on the vertices of K_{n(i)} . For an edge \{\overline{x},\overline{y}\} in \prod_{i=1}^d K_{n(i)} , with \overline{x} lexicographically less than \overline{y} , we say the type is t(\overline{x}, \overline{y}) = (a_1, \ldots, a_d) where a_i = 1 if x_i <_i y_i and a_i = -1 if x_i >_i y_i (the coordinates of an edge differ in every factor).
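A minimal Python sketch of this definition, assuming the factors are ordered as integers (the function name edge_type is ours):

```python
def edge_type(x, y):
    """±1 type of the edge {x, y} in a product of complete graphs whose
    factors are ordered by the usual order on integers. The pair is
    normalised so that x is lexicographically smaller, hence the type
    always starts with 1."""
    if y < x:                       # lexicographic comparison of tuples
        x, y = y, x
    return tuple(1 if xi < yi else -1 for xi, yi in zip(x, y))

# The two edges of the "typical example" K_2 x K_2 (vertices 0, 1):
print(edge_type((0, 0), (1, 1)))    # (1, 1)
print(edge_type((0, 1), (1, 0)))    # (1, -1)
```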

Exercise.

  1. Compute the types of the “typical example” that appears before the definition. Do the same for the product of three edges.
  2. Every type starts with a 1 .
  3. There are at most 2^{d-1} many types (in a product of dimension d ).
  4. Every product of K_{n(i)} of dimension d contains all possible types.
  5. Every product of K_2 of dimension d contains every type exactly once.
  6. Use types to show that there is an edge colouring of K_N \times K_N that does not admit a monochromatic K_2 \times K_2 . Conclude that the desired Ramsey witness might have higher dimension than our starting graph.

Embeddings

It is worth examining how embeddings work in big products, because they can be a little counterintuitive.

Exercise. Show that K_3 does not embed into K_3 \times K_2 .

The above exercise shows that A need not embed into A \times K_n . Fortunately, this can be fixed.

Lemma. Let A be a graph with n vertices, and suppose n \leq N . Then A embeds into A \times K_N .
Proof. Assume K_N is defined on the set \{1, \ldots, N\} . Let \phi : A \longrightarrow A \times K_N be defined by the diagonal map \phi(a_i) = (a_i, i) . Note that this is an embedding (Exercise).
Corollary. Let A be a graph with n vertices, and suppose n \leq N . Then A embeds into A \times K_N \times \ldots \times K_N .
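A small Python sketch (our own code) of the diagonal map from the lemma, checked on a path with three vertices and N = 3 :

```python
# Diagonal embedding of A into A x K_N (N >= |V(A)|), as in the lemma.
# A is a path on three vertices; K_N is the complete graph on {0, ..., N-1}.
A_vertices = [0, 1, 2]
A_edges = {frozenset(e) for e in [(0, 1), (1, 2)]}
N = 3

def product_edge(p, q):
    """Edge test in A x K_N: both coordinates must move along an edge."""
    (u, i), (v, j) = p, q
    return frozenset((u, v)) in A_edges and i != j

phi = {a: (a, a) for a in A_vertices}   # the diagonal map a_i -> (a_i, i)

# phi is injective and preserves both edges and non-edges:
print(len(set(phi.values())) == len(A_vertices))
pairs = [(u, v) for u in A_vertices for v in A_vertices if u < v]
print(all((frozenset((u, v)) in A_edges) == product_edge(phi[u], phi[v])
          for u, v in pairs))
```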

A type lemma

Exercise 4 above points out that we cannot hope to only have one type of edge, so instead we need to make sure that all types receive the same colour. We do this in two stages by first making sure that all edges of the same type have the same colour; that is the content of the next lemma.

Lemma. Fix a finite set I , numbers n(i) \in \mathbb{N} , k \in \mathbb{N} and a type t of dimension \vert I \vert . There are N(i) \in \mathbb{N} such that for every partition E_t (\prod_{i \in I} K_{N(i)}) = \mathcal{A}_1 \cup \ldots \cup \mathcal{A}_k of the type- t edges, there is a colour c(t) and a subgraph isomorphic to \prod_{i \in I} K_{n(i)} all of whose edges of type t are in \mathcal{A}_{c(t)} .
Corollary. There is a single large enough N such that, for every k -edge-colouring of \prod_{i \in I} K_N , one can find a copy of \prod_{i \in I} K_{n(i)} in which each edge type t is monochromatic in some colour c(t) (but different types might have different colours).

Note. In this lemma the dimension does not increase.

Proof of corollary. There are at most 2^{\vert I \vert - 1} many different types, so we just apply the lemma that many times. Take N = \max_{i \in I} N(i) .

Proof of lemma. This will be a product argument very similar to the poset argument, inducting by dimension. Since we are fixing the type ahead of time, there is no difficulty with the “cross-edges”.

[Dimension d=1 ] This is the classical Ramsey Theorem.

[d \Rightarrow d+1 ] Fix structures K_{n(1)}, \ldots, K_{n(d)} , the “extra” structure K_{n(d+1)} and k \in \mathbb{N} .

By the d=1 case, there is a K_{N(d+1)} such that K_{N(d+1)} \longrightarrow (K_{n(d+1)})_k^\text{edge} .

Let k_1 = k^{\vert E(K_{N(d+1)}) \vert} .

By the inductive hypothesis (applied to the type t \upharpoonright d and k_1 colours), there are K_{N(1)}, \ldots, K_{N(d)} such that every k_1 -colouring of the type- (t \upharpoonright d) edges of \prod_{i \leq d} K_{N(i)} yields a copy of \prod_{i\leq d} K_{n(i)} whose type- (t \upharpoonright d) edges are monochromatic.

Let \chi : E_t (\prod_{i \leq d+1} K_{N(i)}) \rightarrow k be an arbitrary type- t -edge colouring. We define two related colourings, first on the big part \prod_{i \leq d} K_{N(i)} and then on the small part K_{N(d+1)} .

Consider the colouring \chi_{big} : E_{t \upharpoonright d}(\prod_{i \leq d} K_{N(i)}) \rightarrow k_1 . For M \in E_{t \upharpoonright d} (\prod_{i \leq d} K_{N(i)}) , define \chi_{big}(M) = \langle\chi(M \times x) : x \in E(K_{N(d+1)}) \rangle , where from each product M \times x of two edges we only take the one edge of type t .

Key step. This previous step is the only step that is different from the poset case.

Find copies K^\prime_{n(i)} \subseteq K_{N(i)} (for 1 \leq i \leq d ) such that the type- (t \upharpoonright d) edges of K^\prime_{n(1)} \times \ldots \times K^\prime_{n(d)} are \chi_{big} -monochromatic.

We now define the second colouring. Let M \in E_{t \upharpoonright d}(K^\prime_{n(1)} \times \ldots \times K^\prime_{n(d)}) be fixed. Define \chi_{small} : E(K_{N(d+1)}) \rightarrow k by \chi_{small}(x) = \chi(M \times x) , where again we only take the one edge from M \times x of type t .

There is a copy K^\prime_{n(d+1)} \cong K_{n(d+1)} in K_{N(d+1)} that is \chi_{small} -monochromatic.

Thus K^\prime_{n(1)} \times \ldots \times K^\prime_{n(d)} \times K^\prime_{n(d+1)} is our desired copy, and its type- t edges E_t (K^\prime_{n(1)} \times \ldots \times K^\prime_{n(d+1)}) are \chi -monochromatic.

The proof of edge-Ramsey

Overview of proof.

  1. Because of our embedding theorem, without loss of generality we may assume our graph is a product of K_{n(i)} .
  2. Because of our corollary, extend each n(i) to a large enough N (so each edge type individually is monochromatic).
  3. Use Hilbert’s Theorem (on the index set) to extend the dimension to D (to make sure all edge types have the same colour).

Proof of edge-Ramsey. Without loss of generality we may instead prove the following: For all \prod_{i \leq d} K_{n(i)} , \forall k \in \mathbb{N} there is a graph \prod_{j \in J} K_{N(j)} such that \prod_{j \in J} K_{N(j)} \longrightarrow (\prod_{i \leq d} K_{n(i)})_k^\text{edge} .

Let n \geq \max_{i \leq d} n(i) .

We apply Hilbert’s Theorem to the index set. That is, there is a sufficiently large D so that any partition of \mathcal{P}(D) into k parts yields pairwise disjoint nonempty sets A_0, A_1, \ldots, A_d such that every union A_0 \cup \bigcup_{j\in J} A_j lies in the same part (regardless of the set J \subseteq \{1, \ldots, d\} ).

Extend n to a suitably large N (as in the corollary, applied to the index set D and all 2^{D-1} many types).

Claim. \prod_{1 \leq j \leq D} K_N is the desired Ramsey witness.

Fix an edge colouring of \prod_{1 \leq j \leq D} K_N . From the choice of N we can stabilize the colouring on each edge type, and get a copy of the graph \prod_{1 \leq j \leq D} K_n on which the colour of an edge depends only on its type.

From the choice of D , consider the following k -colouring of \mathcal{P}(D) . For A \subseteq D consider the type t_A that is 1 on the points in A and -1 off of it. Assign A the colour that t_A receives above.

Find A_0, A_1, \ldots, A_d as in Hilbert’s theorem. Note that (as in the embeddings section) each \prod_{i \in A_j} K_n contains a copy of K_n . So \prod_{1 \leq i \leq d} K_n embeds into \prod_{i \in A_0} K_n \times \ldots \times \prod_{i \in A_d} K_n \times \prod_{i \notin A_0 \cup \ldots \cup A_d} K_n = \prod_{1 \leq i \leq D} K_n .

We claim that our copy of ā؉_{1 \leq i \leq d} K_n is edge-monochromatic. We already know that each type of edge is monochromatic, so it will be enough to show that different edge types get the same colour.

We note that each edge type t of this copy can be described using a set of the form A_0 \cup \bigcup_{j\in J(t)} A_j , for some J(t) \subseteq \{1, \ldots, d\} , and by the choice of the A_j all of these types receive the same colour. Remark also that A_0 appears in each of these unions; this corresponds to the fact that every type starts with a 1 .

Hales-Jewett

Definition. For a fixed alphabet A = \{a_1, \ldots, a_r\} and a variable x \notin A , a constant word (of length N ) is an element of A^N . A variable word w(x) is an element of (A \cup \{x\})^N in which x shows up at least once. The (constant) word w(a_i) is the word w(x) with every occurrence of x replaced by a_i . The positions of the variable x are referred to as the moving part.

A combinatorial line is a set of the form L = \{w(a) : a \in A\} for a fixed variable word w(x) .

Example. Consider the alphabet A = \{a,b,c\} and N=2 . There are seven variable words: ax,bx,cx,xa,xb,xc,xx which correspond to the seven combinatorial lines: \{aa,ab,ac\}, \{ba,bb,bc\}, \{ca,cb,cc\}, \{aa,ba,ca\}, \{ab,bb,cb\}, \{ac,bc,cc\} and \{aa,bb,cc\} . The moving part of ax is the second coordinate.
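The example can be checked mechanically. The following Python sketch (illustrative code, not from the lecture) enumerates all variable words of length N over A and the combinatorial lines they generate.

```python
from itertools import product

A = ['a', 'b', 'c']
N = 2
x = 'x'   # the variable symbol

# All variable words of length N: words over A ∪ {x} using x at least once.
variable_words = [w for w in product(A + [x], repeat=N) if x in w]

def line(w):
    """The combinatorial line of a variable word: substitute each letter
    of the alphabet for every occurrence of x."""
    return [''.join(a if s == x else s for s in w) for a in A]

for w in variable_words:
    print(''.join(w), '->', line(w))
print(len(variable_words))   # 7, matching the seven lines listed above
```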

Theorem (Hales-Jewett, 1964). For every finite alphabet A with \vert A\vert=r and every k \in \mathbb{N} , there is an N \in \mathbb{N} such that for all partitions A^N = \mathcal{A}_1 \cup \ldots \cup \mathcal{A}_k , there is an i_0 and a combinatorial line L in A^N such that L \subseteq \mathcal{A}_{i_0} .
Notation. We abbreviate this theorem above as HJ(1, r, k ), where r is the length of the alphabet, k is the number of colours and 1 refers to the dimension of the combinatorial line (more on that in a moment). Sometimes we also abuse notation and use HJ(1, r, k ) to refer to the least number N(1,r,k) that works for the theorem.

In order to prove the Hales-Jewett Theorem for all sizes of alphabet, we will actually need to induct with a stronger, higher-dimension theorem. First a simple example.

Fact. HJ(1,2,k ) is true for all k .

Let A = \{a,b\} . For k colours, we claim that N=k+1 suffices.

Take the words a^i b^{N-i} , for 0 \leq i \leq N . If they are k -coloured, then two of the words, say those given by i < j , must receive the same colour. So the combinatorial line given by the word w(x) = a^i x^{j-i} b^{N-j} is monochromatic.
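Here is a short Python sketch of this pigeonhole argument (the specific colouring is an arbitrary illustrative choice; any k -colouring would do).

```python
# Hand proof of HJ(1, 2, k): with N = k + 1 letters, any k-colouring of
# {a,b}^N has a monochromatic combinatorial line.
k = 3
N = k + 1
colour = lambda word: word.count('a') % k   # an arbitrary k-colouring

words = ['a' * i + 'b' * (N - i) for i in range(N + 1)]

# Pigeonhole: among these N + 1 > k words, two share a colour.
seen = {}
for i, w in enumerate(words):
    c = colour(w)
    if c in seen:
        i0, j0 = seen[c], i
        break
    seen[c] = i

variable_word = 'a' * i0 + 'x' * (j0 - i0) + 'b' * (N - j0)
line = [variable_word.replace('x', letter) for letter in 'ab']
print(variable_word, line, [colour(w) for w in line])   # equal colours
```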

We now introduce the higher-dimensional version of Hales-Jewett.

Definition. For a fixed alphabet A = \{a_1, \ldots, a_r\} and variables x_1, \ldots, x_d \notin A , a multi-variable word w(\overline{x}) is an element of (A \cup \{x_1, \ldots, x_d\})^N in which each x_i shows up at least once.

A combinatorial subspace (of dimension d ) is \mathcal{W}_{w(\overline{x})} = \{w(a_{i_1}, \ldots, a_{i_d}) : a_{i_1}, \ldots, a_{i_d} \in A\} for a fixed multi-variable word w(\overline{x}) .

Theorem (Higher dimensional Hales-Jewett). For every finite alphabet A with \vert A\vert=r and all d,k \in \mathbb{N} , there is an N \in \mathbb{N} such that for all partitions A^N = \mathcal{A}_1 \cup \ldots \cup \mathcal{A}_k , there is an i_0 and a combinatorial subspace S of dimension d in A^N such that S \subseteq \mathcal{A}_{i_0} .
Notation. We abbreviate this theorem above as HJ(d, r, k ), where r is the size of the alphabet, k is the number of colours and d refers to the dimension of the combinatorial subspace. Sometimes we also abuse notation and use HJ(d, r, k ) to refer to the least number N(d,r,k) that works for the theorem.

The proof of Hales-Jewett

Overview of proof.

  1. Fix r . Observe that trivially HJ(1,r,1 ).
  2. Fix k . Show HJ(1,2,k ). (Already done above.)
  3. Fix d,r,k . Show HJ(1,r,k ) implies HJ(d,r,k ). (Product argument.)
  4. Fix r,k . Show HJ(1,r+1,k-1 ) (and the previous implication) implies HJ(1,r+1,k ). (A fancier version of the second observation.)
Exercise. Show that these four steps are enough to prove the Hales-Jewett theorem \forall d,r,k .
Proof that, for fixed d,r,k , HJ(1,r,k ) implies HJ(d,r,k ).
This is precisely a product argument, as in the proof of Hilbert’s Theorem. Assume we have HJ(1,r,k ) and HJ(d,r,k ); we will show HJ(d+1,r,k ). Take
  1. N_1 to be the number given by \text{HJ}(1, r, k) .
  2. N_2 to be the number given by \text{HJ}(d, r, k^{\vert A^{N_1} \vert}) .

We claim that \text{HJ}(d+1, r, k) \leq N := N_1 + N_2 . Let A be the alphabet with r letters. Fix a k -colouring \xi on A^N .

We induce a colouring \xi_2: A^{N_2} \longrightarrow k^{A^{N_1}} as follows. For w_2 \in A^{N_2} , take the vector of \xi -colours that each w_1 \in A^{N_1} gives to the concatenated word w_1 w_2 .

Definition. The concatenation of two words w \in A^n and v \in A^m is the word wv\in A^{n+m} achieved by writing w followed by v . In general this operation is not commutative.

For example ab concatenated with cac is the word abcac .

Concatenate comes from the Latin “con” (with, together) and “caten” (chain). It literally means “chain together”.

As in the poset case and in Hilbert’s theorem, there is no difficulty or ambiguity in this concatenation. This is unlike the case of graphs, which we worked around by using types.

Since N_2 was chosen large enough, there is a monochromatic combinatorial subspace \mathcal{W}_{w(\overline{x})} of A^{N_2} of dimension d .

Now induce a colouring \xi_1: A^{N_1} \longrightarrow k as follows. Fix a word w_2 \in \mathcal{W}_{w(\overline{x})} . For w_1 \in A^{N_1} , take the \xi -colour that is assigned to the concatenated word w_1 w_2 . (By choice of \xi_2 , every word w_2 \in \mathcal{W}_{w(\overline{x})} induces the same colouring \xi_1 .)

Since N_1 was chosen large enough, there is a monochromatic combinatorial line \mathcal{W}_{v(x)} of A^{N_1} .

Our desired monochromatic subspace of A^N of dimension d+1 is \mathcal{W}_{v(x)w(\overline{x})} , obtained by concatenating each element of \mathcal{W}_{v(x)} with each element of \mathcal{W}_{w(\overline{x})} .
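The two induced colourings mirror the product argument from the beginning of the lecture. Here is a small Python sketch (with illustrative names and an arbitrary toy colouring) of how \xi_2 and \xi_1 are built from \xi by concatenation.

```python
from itertools import product

# Sketch of the induced colourings in the HJ product step. The alphabet,
# lengths and colouring below are small illustrative choices, not the
# actual Hales-Jewett numbers.
A = ['a', 'b']
N1, N2, k = 2, 3, 2
xi = lambda word: sum(map(ord, word)) % k   # an arbitrary k-colouring of A^(N1+N2)

prefixes = [''.join(p) for p in product(A, repeat=N1)]

# xi_2 colours a word w2 of length N2 by the vector of xi-colours of all
# concatenations w1 w2; there are at most k ** (len(A) ** N1) such vectors.
def xi_2(w2):
    return tuple(xi(w1 + w2) for w1 in prefixes)

# Once a xi_2-monochromatic subspace is found, any word w2 in it induces
# the same colouring xi_1 on A^N1:
def xi_1(w1, w2):
    return xi(w1 + w2)

print(xi_2('aba'), xi_1('ab', 'aba'))
```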

Proof that, for fixed r,k , HJ(1,r+1,k-1 ) implies HJ(1, r+1, k ).

Fix r,k . Our goal is to go up a colour.

We will be assuming that HJ(d,r,k ) holds for all d (but fixed r and k ). (That is, for alphabets of size r and using k colours we can find a monochromatic subspace of dimension d .)

We also assume HJ(1,r+1,k-1 ). (That is, for alphabets of length r+1 and using k-1 colours, we can find a monochromatic line.)

Claim. We want HJ(1, r+1, k ). (That is for alphabets of length r+1 and using k colours, we can find a monochromatic line.)

Assume that our alphabet is A = B \sqcup \{a\} where \vert B \vert = r .

Let d = 1 + HJ(1, r+1, k-1) . We will show that HJ(1, r+1, k ) \leq HJ(d,r,k ) =: N.

Fix a k -colouring on the words of A^N . This also colours the words of B^N \subseteq A^N .

There is a monochromatic combinatorial subspace of dimension d in B^N ; call it \mathcal{W}_{w(\overline{x})} = \{w(b_1, \ldots, b_d) : b_i \in B, i \leq d\} . Say that every word here gets colour k_0 .

Now consider the words \mathcal{W}_{w(a, \ldots)} = \{w(a, b_2, \ldots, b_d) : b_i \in B, 2 \leq i \leq d \} , which are obtained by substituting a for the first variable of the word and letters from B for the rest. Note that if any single one of these words is assigned the colour k_0 , then we can create a monochromatic combinatorial line in A^N . (Exercise.)

Suppose now that \mathcal{W}_{w(a, \ldots)} does not use the colour k_0 . Then we have a (k-1) -colouring of the words A^{d-1} . By assumption we can find a monochromatic line L here, and this L also gives a monochromatic line in A^N (since the alphabet is not changing).

Let us prove Van der Waerden’s theorem as a corollary.

Theorem (Van der Waerden). For all n,k there is an N = N(n,k) such that for every colouring \xi : \{0, 1, \ldots, N-1\} \longrightarrow k there is a monochromatic arithmetic progression of length n .

Proof. Fix a number p > n and let N = \text{HJ}(1, p, k) . Fix a k -colouring of the numbers 0, 1, \ldots, p^N - 1 .

We describe the numbers from 0 to p^N - 1 in a way that makes Hales-Jewett applicable. We want combinatorial lines to correspond to arithmetic progressions.

For each word (a_0, a_1, \ldots, a_{N-1}) \in p^N associate to it the number a_0 p^0 + a_1 p^1 + \ldots + a_{N-1} p^{N-1} . (We are identifying words with their representation in base p .)

Now a combinatorial line will correspond to an arithmetic progression of length p , and hence (since p > n ) it contains one of length n . (Exercise.)

Example. For an arithmetic progression of length 3 , consider the words in base 3 . Hales-Jewett will tell you how big you need to take the largest power to be. A combinatorial line might look like this (with N = 5 ).

The word is w(x) = a + x3 + x3^2 + b3^3 + x3^4 , and the line is a + b27 , a + 3 + 9 + b27 + 81 , a + 6 + 18 + b27 + 162 . This can be seen as an arithmetic progression: (a+ b27) + x(3+9+81) where x = 0,1,2 .
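For the sceptical reader, here is a tiny Python sketch (our own illustrative code) checking that the line of this variable word, read in base p , is indeed an arithmetic progression.

```python
# Check that a combinatorial line over the alphabet {0, ..., p-1}, read in
# base p, is an arithmetic progression. The word below is the one from the
# example, with a and b standing for two fixed digits.
p = 3
a, b = 1, 2          # any fixed digits for the constant positions

# The word w(x) = a + x*3 + x*9 + b*27 + x*81 from the example (N = 5):
digits = [a, 'x', 'x', b, 'x']   # position i holds the coefficient of p**i

def value(x):
    return sum((x if d == 'x' else d) * p**i for i, d in enumerate(digits))

line = [value(x) for x in range(p)]
diffs = [line[i + 1] - line[i] for i in range(p - 1)]
print(line, diffs)   # equal differences: an arithmetic progression of length p
```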

6 thoughts on “Bootcamp 6 – Ramsey DocCourse Prague 2016”

  1. I would say that in the application of the Hilbert lemma, you partition the subsets of the index set into k parts and look for a monochromatic lattice. Let N be the number given by the Hilbert lemma.

    You build a product of N copies of a complete graph, and by the previous lemma any edge coloring gives you a product of N copies of (smaller) complete graphs such that the coloring depends only on the type. To complete the proof we need to find a product of d complete graphs as a monochromatic subgraph of this.

    A type is a sequence of -1 s and 1 s of length N , and the coloring induces a coloring of types. To this you apply the Hilbert lemma (a set corresponds to the sequence whose 1 s mark the elements of the set), and the monochromatic cube will then give you the monochromatic copy.

    I think it is easier to see with a d -dimensional Hales-Jewett cube, but with the Hilbert lemma it is essentially the same.

