## distributive lattices

published on 2019-02-11

Lattices show up everywhere in math and logic. They are as fundamental a building block as the set, the function, or the category. And like all of the above, they can be used as a foundational element on which to build the rest of mathematics and logic. Yet even though it is at least as powerful as established concepts like the function or the set, the lattice isn't nearly as well known. High time for a quick introduction, wouldn't you say?

A lattice is a partially ordered set $(L, \leq)$ such that for every two elements $a, b \in L$ there exists both a unique least upper bound (or join) $a \vee b$ and a unique greatest lower bound (or meet) $a \wedge b$.
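To make the definition concrete, here is a small sketch (mine, not from the original post) in Rust: the positive integers ordered by divisibility form a lattice, with the greatest common divisor as meet and the least common multiple as join.

```rust
// Sketch: the positive integers ordered by divisibility form a lattice.
// "a ≤ b" here means "a divides b".

fn gcd(a: u64, b: u64) -> u64 {
    if b == 0 { a } else { gcd(b, a % b) }
}

// Meet: the greatest lower bound under divisibility is the gcd.
fn meet(a: u64, b: u64) -> u64 {
    gcd(a, b)
}

// Join: the least upper bound under divisibility is the lcm.
fn join(a: u64, b: u64) -> u64 {
    a / gcd(a, b) * b
}

fn main() {
    assert_eq!(meet(12, 18), 6);  // 6 divides both 12 and 18
    assert_eq!(join(12, 18), 36); // both 12 and 18 divide 36
    println!("meet(12, 18) = {}, join(12, 18) = {}", meet(12, 18), join(12, 18));
}
```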

The canonical example of a lattice is the collection of subsets of any given set $X$, ordered by inclusion. Indeed, take any two subsets $A, B \subseteq X$; then there exists exactly one subset of $X$ that contains both $A$ and $B$ and is contained in every other subset containing $A$ and $B$. To make this concrete, let $X$ be the set of all months, $A := \{\text{January}, \text{April}\}$ and $B := \{\text{December}, \text{April}, \text{July}\}$. Then $A \cup B = \{\text{January}, \text{April}, \text{July}, \text{December}\}$ contains both $A$ and $B$, and any other set containing $A$ and $B$ must also contain $A \cup B$. It is precisely the smallest set containing them both, or, in lattice terms, their least upper bound. Completely analogously, $A \cap B$ is their greatest lower bound.
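The month example can be sketched in a few lines of Rust, with set union playing the role of join and intersection the role of meet (an illustration of the idea, nothing more):

```rust
use std::collections::BTreeSet;

fn main() {
    let a: BTreeSet<&str> = ["January", "April"].into_iter().collect();
    let b: BTreeSet<&str> = ["December", "April", "July"].into_iter().collect();

    // Join (least upper bound): the smallest set containing both A and B.
    let join: BTreeSet<&str> = a.union(&b).copied().collect();
    // Meet (greatest lower bound): the largest set contained in both A and B.
    let meet: BTreeSet<&str> = a.intersection(&b).copied().collect();

    assert_eq!(join.len(), 4); // {January, April, July, December}
    let expected_meet: BTreeSet<&str> = ["April"].into_iter().collect();
    assert_eq!(meet, expected_meet);
    println!("join = {:?}, meet = {:?}", join, meet);
}
```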

Because lattices have such a clear structure, they are easily drawn using Hasse diagrams. In these diagrams, elements that are greater with respect to the given relation are placed above lesser elements. Below is the Hasse diagram for a lattice-like type system similar to one you might find in a programming language like Scala. In this example, the objects are types and the relation is given by inheritance. Every type inherits from the $\text{Object}$ type, and every type is in turn inherited by $\text{Nothing}$[^1]. The join of two types is now their earliest shared ancestor. The meet can be thought of as the first common successor. In this example, and in fact for most type systems, this will be the $\text{Nothing}$ type whenever two types are not already in an inheritance relation with each other. Programming languages with multiple inheritance, like C++, can have shared successors that are neither of the input types nor the $\text{Nothing}$ type, but such a successor will generally not be unique.
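As a sketch of how such a join might be computed, consider a hypothetical toy hierarchy (the type names below are made up for illustration, not any real compiler's API): walk the ancestor chain of one type and take the first element that also appears in the other type's chain.

```rust
// Hypothetical single-inheritance hierarchy:
//   Object > Number > (Int, Float), and Object > String.
// The join of two types is their nearest common ancestor.
fn parent(t: &str) -> Option<&'static str> {
    match t {
        "Int" | "Float" => Some("Number"),
        "Number" | "String" => Some("Object"),
        _ => None, // Object is the top of the hierarchy
    }
}

// The chain of a type together with all of its ancestors, nearest first.
fn ancestors(mut t: &'static str) -> Vec<&'static str> {
    let mut chain = vec![t];
    while let Some(p) = parent(t) {
        chain.push(p);
        t = p;
    }
    chain
}

// Join: the first ancestor of `a` that is also an ancestor of `b`.
fn join(a: &'static str, b: &'static str) -> &'static str {
    let bs = ancestors(b);
    ancestors(a).into_iter().find(|t| bs.contains(t)).unwrap()
}

fn main() {
    assert_eq!(join("Int", "Float"), "Number");
    assert_eq!(join("Int", "String"), "Object");
    assert_eq!(join("Int", "Number"), "Number"); // already related by inheritance
}
```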

A particularly useful kind of lattice is the distributive lattice. These have the additional property that for any three elements $a, b, c$ in the lattice, $a \vee (b \wedge c) = (a \vee b) \wedge (a \vee c)$. We say that taking joins distributes over taking meets. Somewhat surprisingly, it follows from this that meets distribute over joins as well. Such an equational definition may look convoluted, but the property really is quite natural. Our favorite lattices, the subsets ordered by inclusion, are distributive. Indeed, taking the union of a subset with the intersection of two others produces the same result as taking the intersection of the unions of the first set with each of the others. We can think of distributive lattices as being set-like, in a way. This idea is formalized by Birkhoff's representation theorem, which was proven in 1937.
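Since the subset lattice of a finite set is finite, distributivity can be checked exhaustively. Here is an illustrative Rust sketch that encodes subsets of a four-element set as bitmasks, so that join is bitwise OR and meet is bitwise AND:

```rust
// Sketch: verify both distributive laws on the lattice of subsets of a
// 4-element set, with subsets encoded as bitmasks.
// Join = bitwise OR, meet = bitwise AND.
fn main() {
    let all: u8 = 0b1111; // the full 4-element set
    for a in 0..=all {
        for b in 0..=all {
            for c in 0..=all {
                // a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c)
                assert_eq!(a | (b & c), (a | b) & (a | c));
                // and dually: a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)
                assert_eq!(a & (b | c), (a & b) | (a & c));
            }
        }
    }
    println!("all {} triples satisfy both distributive laws", 16u32.pow(3));
}
```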

Logic is one of the places where distributive lattices show up often. Lattices and logic are so intertwined, even, that the symbols for the join and meet operators double as the logical symbols for disjunction ($\vee$) and conjunction ($\wedge$). Take for example the set of (equivalence classes of) formulas on the single propositional variable $p$. They form a very simple distributive lattice for which the meet and join operations behave exactly as you'd expect them to from a classical logic perspective. The formulas of all but the very funkiest of logics form distributive lattices. These special lattices are known as Lindenbaum algebras.

The distributive property is so natural and widespread that it was once thought that all lattices are distributive. Alas, this is not the case. For example, the diamond and pentagon lattices are not distributive. In the diamond, $a \vee (b \wedge c) = a \vee 0 = a$, while $(a \vee b) \wedge (a \vee c) = 1 \wedge 1 = 1$. Can you see why the pentagon is not distributive?
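The failure in the diamond can be checked mechanically. Below is an illustrative Rust sketch (the element names `Bot`, `A`, `B`, `C`, `Top` are mine, standing for $0, a, b, c, 1$) with the diamond's join and meet spelled out:

```rust
// The diamond lattice: Bot below a, b, c, which are pairwise incomparable
// and all below Top.
#[derive(Clone, Copy, PartialEq, Debug)]
enum M3 { Bot, A, B, C, Top }

use M3::*;

fn join(x: M3, y: M3) -> M3 {
    match (x, y) {
        (Bot, z) | (z, Bot) => z,
        (Top, _) | (_, Top) => Top,
        _ if x == y => x,
        _ => Top, // distinct middle elements only share Top above them
    }
}

fn meet(x: M3, y: M3) -> M3 {
    match (x, y) {
        (Top, z) | (z, Top) => z,
        (Bot, _) | (_, Bot) => Bot,
        _ if x == y => x,
        _ => Bot, // distinct middle elements only share Bot below them
    }
}

fn main() {
    // a ∨ (b ∧ c) = a ∨ 0 = a
    let lhs = join(A, meet(B, C));
    // (a ∨ b) ∧ (a ∨ c) = 1 ∧ 1 = 1
    let rhs = meet(join(A, B), join(A, C));
    assert_eq!(lhs, A);
    assert_eq!(rhs, Top);
    assert!(lhs != rhs); // distributivity fails
}
```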

Luckily, these are essentially the only two non-distributive lattices, in the sense that a lattice is distributive if and only if it does not contain either the diamond or the pentagon as a sublattice. This is reminiscent of Kuratowski's theorem, which states that a finite graph can be drawn without any crossing edges if and only if it does not contain certain subgraphs. Such characterization results are often thought to be among the most beautiful and most useful results in mathematics. They allow us to reframe problems in a different context, connecting distinct fields of mathematics and thereby enabling the use of techniques from one field to prove things about objects in another.

Magic.

[^1]: Nothing types, empty types, and uninhabited types are all names for a type without any values. It is useful in strictly typed languages for producing expressions that always match the required type. For example, `return` expressions are often given the never type so that functions like this typecheck:

```rust
fn collatz_terminates(x: u64) -> bool {
    let next: u64 = if x == 1 {
        return true
    } else if x % 2 == 0 {
        x / 2
    } else {
        3 * x + 1
    };
    collatz_terminates(next)
}
```