## Game trees under a partial order part 3: Alpha Beta pruning

In this third and last post of the series I will discuss alpha beta pruning. The context will be evaluation functions on game trees that take values in a distributive lattice. A prime example would of course be the linear lattice $\mathscr{L}(X, \leq)$ over the partially ordered set $(X, \leq)$, but this post does not depend on specific features of this linear lattice. The result is a description of alpha beta pruning for distributive lattices that slightly improves on the one given by Ginsberg et al. in their 2001 paper “Alpha Beta pruning under partial orders”.

Alpha Beta pruning is a way to compute the value of an evaluation function on a game tree for the root node while visiting as few child nodes as possible. See the previous post for a more extensive introduction to evaluation functions. Here I will only briefly recall the recursive definition of an evaluation function $f$ on a game tree that takes values in a lattice $L$. I will assume that $L$ has a minimal and a maximal element, denoted by $\mathbf{0}$ and $\mathbf{1}$. The normal forms $\textrm{CNF}(X, \leq)$ and $\textrm{DNF}(X, \leq)$ of the linear lattice are examples of this.

1. The value of $f$ for a leaf node is given by some heuristic game evaluation function (e.g. chess piece values).
2. At an inner min node (where the minimizing player is to make a move) the value of $f$ is defined recursively as the greatest lower bound in $L$ of all values of $f$ on direct child nodes.
3. At an inner max node the value of $f$ is defined as the least upper bound of all values of $f$ on direct child nodes.

This definition leads to the following algorithm for the computation of $f$. This algorithm is also the basis of further considerations on ways to prune parts of the game tree that are irrelevant for the computation of $f$ on the root node.

    function f(node)
        local value
        if node is a leaf node then
            value = heuristic position value of node
        elseif node is a min node then
            value = 1
            for child in all children of node do
                value = value ∧ f(child)
            end
        else
            value = 0
            for child in all children of node do
                value = value ∨ f(child)
            end
        end
        return value
    end

The structure of this naive algorithm is simple. To compute the value at a min node start with the maximal value $\mathbf{1}$ then iterate over all child nodes and adjust the node’s value by taking the greatest lower bound with the value of a child node. At a max node the procedure is similar but starts off with the minimal value $\mathbf{0}$ and repeatedly takes the least upper bound. The computation recurses depth first left to right over all child nodes, where “left to right” is according to the enumeration order of child nodes. Move ordering does not play a role in this naive algorithm but it can play a big role in alpha beta pruning in terms of the number of pruned nodes. I will not address move ordering in this post. During execution of the algorithm the state of the computation is schematically depicted below. As in the example in the previous post I will assume that the top node is a min node.
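To make this concrete, here is a minimal sketch of the naive algorithm in Python, using the distributive lattice of subsets of $\{0, 1, 2\}$ (meet = intersection, join = union, $\mathbf{0} = \emptyset$, $\mathbf{1} = \{0,1,2\}$) as the value lattice. The tuple-based tree encoding is an assumption of the sketch, not something from the posts.

```python
# Naive evaluation over a concrete distributive lattice:
# subsets of {0, 1, 2} under intersection (meet) and union (join).
from functools import reduce

TOP = frozenset({0, 1, 2})   # the maximal element 1
BOT = frozenset()            # the minimal element 0

def meet(a, b): return a & b   # greatest lower bound
def join(a, b): return a | b   # least upper bound

def evaluate(node):
    """Depth-first, left-to-right computation of the evaluation function."""
    kind, payload = node
    if kind == "leaf":
        return payload                       # heuristic value of the leaf
    if kind == "min":
        return reduce(meet, (evaluate(c) for c in payload), TOP)
    return reduce(join, (evaluate(c) for c in payload), BOT)

# A tiny min-rooted tree: ({0} ∨ {1}) ∧ {0, 2} = {0}
tree = ("min", [
    ("max", [("leaf", frozenset({0})), ("leaf", frozenset({1}))]),
    ("leaf", frozenset({0, 2})),
])
print(sorted(evaluate(tree)))  # prints [0]
```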

*Intermediate values in the depth-first left-to-right computation of an evaluation function*

Here $x_1, x_2, \dotsc$ and $y_1, y_2, \dotsc$ are the intermediate values that are computed for the inner min and max nodes respectively. The $\star$ symbol indicates all subtrees located to the left of the current path of computation in the tree. These are the subtrees for which a value has already (recursively) been computed. As the computation progresses each intermediate value $x_j$ descends from $\mathbf{1}$ and each $y_j$ ascends from $\mathbf{0}$. If we were to cut the computation short at the point depicted above, the algorithm would backtrack up the tree and yield the final outcome

$x_1 \wedge (y_1 \vee (x_2 \wedge (y_2 \vee \dotsb \vee (x_n \wedge y_n))))$

at the root node. The essential point of alpha beta pruning is that it identifies when this intermediate value remains constant no matter how much larger $y_n$ gets in the remainder of the computation. In that case any further children of the node marked $y_n$ in the diagram cannot contribute to the final result and can be pruned (skipped). In the picture above the search path ends with a max node, but that is irrelevant for alpha beta pruning, as we will see.

A lattice $L$ has a canonical partial order where $x \leq y$ is equivalent to $x \wedge y = x$ (or alternatively $x \vee y = y$). The lattice operators $\wedge$ and $\vee$ indeed result in the greatest lower bound and least upper bound with respect to this ordering. The following lattice property is the key point in alpha beta pruning.

Lemma: When $x, y, z \in L$ and $y \leq z$ then $x \wedge y \leq x \wedge z$ and $x \vee y \leq x \vee z$.

Since $x \wedge y \leq x$ and $x \wedge y \leq y \leq z$, the greatest lower bound $x \wedge y$ is a lower bound for both $x$ and $z$, and it is therefore less than or equal to the greatest lower bound $x \wedge z$. The other inequality is derived similarly.

For sequences $x_1, x_2, \dotsc$ and $y_1, y_2, \dotsc$ in $L$ define sequences $\alpha$ and $\beta$ recursively by

1. $\alpha_1 = x_1 \wedge y_1$ and $\alpha_{j+1}$ is obtained from $\alpha_j$ by the substitution $y_j \leftarrow y_j \vee (x_{j+1} \wedge y_{j+1})$.
2. $\beta_1 = x_1$ and $\beta_{j+1}$ is obtained from $\beta_j$ by the substitution $x_j \leftarrow x_j \wedge (y_j \vee x_{j+1})$.

So the sequence $\alpha$ starts with

$\alpha_1 = x_1 \wedge y_1$
$\alpha_2 = x_1 \wedge (y_1 \vee (x_2 \wedge y_2))$
$\alpha_3 = x_1 \wedge (y_1 \vee (x_2 \wedge (y_2 \vee (x_3 \wedge y_3))))$
$\vdots$

and the sequence $\beta$ with

$\beta_1 = x_1$
$\beta_2 = x_1 \wedge (y_1 \vee x_2)$
$\beta_3 = x_1 \wedge (y_1 \vee (x_2 \wedge (y_2 \vee x_3)))$
$\vdots$

The following inequalities are a direct consequence of the lemma above: each substitution replaces some $y_j$ by a larger element or some $x_j$ by a smaller one, and by iterated application of the lemma the whole expression then moves in the same direction. Moreover $\alpha_j$ is obtained from $\beta_j$ by replacing $x_j$ with $x_j \wedge y_j \leq x_j$, which gives the middle inequality below. The sequences $\alpha$ and $\beta$ satisfy

$\alpha_j \leq \alpha_{j+1} \leq \beta_{j+1} \leq \beta_j$

for all indices $j$. In particular $\alpha$ is an ascending sequence, $\beta$ is a descending sequence, each $\alpha_j$ is a lower bound for $\beta$ and each $\beta_j$ is an upper bound for $\alpha$. If $\alpha_j = \beta_j$ or $\alpha_j = \beta_{j+1}$ for some index $j$ then both sequences are constant from that point onwards. The values in the sequences $\alpha$ and $\beta$ are precisely those obtained by aborting the computation of the evaluation function for the root node at max node $j$ (sequence $\alpha$) or at min node $j$ (sequence $\beta$). Hence further nodes can be pruned from the computation as soon as both sequences yield equal values.
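These inequalities can also be checked mechanically. The Python sketch below evaluates $\alpha_j$ and $\beta_j$ directly from their closed forms in the lattice of subsets of a four-element set (order = inclusion) and verifies the chain $\alpha_j \leq \alpha_{j+1} \leq \beta_{j+1} \leq \beta_j$ for random inputs; the helper names `alpha` and `beta` are mine.

```python
# Check α_j ≤ α_{j+1} ≤ β_{j+1} ≤ β_j in the distributive lattice of
# subsets of a small ground set, for randomly chosen x_j and y_j.
import random

U = range(4)  # ground set; subsets of U form a distributive lattice

def alpha(xs, ys, j):
    # α_j = x_1 ∧ (y_1 ∨ (x_2 ∧ (y_2 ∨ ... (x_j ∧ y_j))))
    acc = xs[j - 1] & ys[j - 1]
    for i in reversed(range(j - 1)):
        acc = xs[i] & (ys[i] | acc)
    return acc

def beta(xs, ys, j):
    # β_j = x_1 ∧ (y_1 ∨ (x_2 ∧ (y_2 ∨ ... (y_{j-1} ∨ x_j))))
    acc = xs[j - 1]
    for i in reversed(range(j - 1)):
        acc = xs[i] & (ys[i] | acc)
    return acc

random.seed(0)
for _ in range(200):
    xs = [frozenset(e for e in U if random.random() < 0.5) for _ in range(5)]
    ys = [frozenset(e for e in U if random.random() < 0.5) for _ in range(5)]
    for j in range(1, 5):
        a, a1 = alpha(xs, ys, j), alpha(xs, ys, j + 1)
        b, b1 = beta(xs, ys, j), beta(xs, ys, j + 1)
        assert a <= a1 <= b1 <= b  # frozenset's <= is inclusion
print("all inequalities hold")
```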

Alpha Beta pruning (strong version): At the first index $j$ where either $\alpha_j = \beta_j$ or $\alpha_j = \beta_{j+1}$ further subnodes of max node $j$ or min node $j+1$ can be pruned.

Note that this formulation of alpha beta pruning resembles “classical” alpha beta pruning for totally ordered sets in that we have an ascending sequence of low values and a descending sequence of high values, and nodes can be pruned as soon as these values coincide. There is also an unfortunate difference between the two. In classical alpha beta pruning the high and low values are easy to compute progressively, as a simple maximum or minimum. In contrast, the high and low values in the pruning algorithm above are not so simple to compute. Since they are computed deepest node first, the computation requires constant backtracking. Moreover, computing $\wedge$ and $\vee$ can be much more expensive than simply taking the maximum or minimum of two numbers.

Also note that the distributive property of the lattice $L$ is not used at all in this version of alpha beta pruning! This does not contradict the results of Ginsberg et al since they formulate alpha beta pruning (in particular “deep pruning”) differently. I will briefly come back to this point at the end of this post.

There is a weaker version of alpha beta pruning based on alternative sequences $\alpha'$ and $\beta'$ that are easier to compute. The trade-off is that it may take longer before the ascending low values and descending high values allow a cutoff, resulting in less aggressive pruning. The weaker estimates for the low and high values follow from the identities below (where the distributive property of $L$ is now essential). Let $y_0 = \mathbf{0}$ and define two new sequences $\alpha'$ and $\beta'$ by

$\alpha'_j = y_0 \vee \dotsb \vee y_{j-1}$ and
$\beta'_j = x_1 \wedge \dotsb \wedge x_j$.

So $\alpha'_j$ is the least upper bound of the values of all max nodes above min node $j$ and $\beta'_j$ is the greatest lower bound of the values of all min nodes above max node $j$. If we take $\alpha_0 = \mathbf{0}$ and $\beta_0 = \mathbf{1}$ then for all indices $j \geq 1$ the following equalities hold:

$\alpha_j = \alpha_{j-1} \vee (\beta'_j \wedge y_j)$ and
$\beta_j = \beta_{j-1} \wedge (\alpha'_j \vee x_j)$.

These can be proved by induction, using the distributive property of the lattice $L$. Taking $y_j = x_j$ in the first equation and $x_j = y_{j-1}$ in the second results in

$\beta_j = \alpha_{j-1} \vee \beta'_j$ and
$\alpha_{j-1} = \beta_{j-1} \wedge \alpha'_j$.

Comparing these equalities it follows that $x_j \leq \alpha'_j$ implies $\beta_j = \alpha_{j-1}$ and $\beta'_j \leq y_j$ implies $\alpha_j = \beta_j$. This leads to the following version of alpha beta pruning.
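The two identities, and hence the implications just derived, can likewise be sanity-checked in a concrete distributive lattice. The Python sketch below verifies them for random subsets of a small ground set, with $\alpha_0 = \mathbf{0}$ and $\beta_0 = \mathbf{1}$ as above; all helper names are mine.

```python
# Verify α_j = α_{j-1} ∨ (β'_j ∧ y_j) and β_j = β_{j-1} ∧ (α'_j ∨ x_j)
# in the subset lattice of a small ground set. Distributivity of ∩
# over ∪ is exactly what makes these identities work.
import random
from functools import reduce

U = frozenset(range(4))

def alpha(xs, ys, j):  # α_j, with α_0 = 0
    if j == 0:
        return frozenset()
    acc = xs[j - 1] & ys[j - 1]
    for i in reversed(range(j - 1)):
        acc = xs[i] & (ys[i] | acc)
    return acc

def beta(xs, ys, j):   # β_j, with β_0 = 1
    if j == 0:
        return U
    acc = xs[j - 1]
    for i in reversed(range(j - 1)):
        acc = xs[i] & (ys[i] | acc)
    return acc

def alpha_prime(ys, j):  # α'_j = y_0 ∨ ... ∨ y_{j-1}, with y_0 = 0
    return reduce(frozenset.union, ys[: j - 1], frozenset())

def beta_prime(xs, j):   # β'_j = x_1 ∧ ... ∧ x_j
    return reduce(frozenset.intersection, xs[:j], U)

random.seed(1)
for _ in range(200):
    xs = [frozenset(e for e in U if random.random() < 0.5) for _ in range(4)]
    ys = [frozenset(e for e in U if random.random() < 0.5) for _ in range(4)]
    for j in range(1, 5):
        assert alpha(xs, ys, j) == alpha(xs, ys, j - 1) | (beta_prime(xs, j) & ys[j - 1])
        assert beta(xs, ys, j) == beta(xs, ys, j - 1) & (alpha_prime(ys, j) | xs[j - 1])
print("identities verified")
```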

Alpha Beta pruning: If the value $x_j$ of min node $j$ becomes less than or equal to $\alpha'_j$, or the value $y_j$ of max node $j$ becomes greater than or equal to $\beta'_j$, then further child nodes can be pruned.

This is a slightly different formulation of alpha beta pruning than Ginsberg et al use in their paper. What they call (shallow or deep) pruning comes down to either $x_j \leq y_k$ for some $k < j$ at min node $j$ or $x_k \leq y_j$ for some $k \leq j$ at max node $j$. These are stronger conditions than those we found above and will therefore prune fewer nodes in general.
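A tiny example in the subset lattice illustrates the difference: the condition $x_j \leq \alpha'_j$ can hold even though $x_j \leq y_k$ fails for every individual $k < j$, so the formulation above can prune where shallow or deep pruning cannot. The concrete values are of course my own choice.

```python
# x_j ≤ α'_j can hold while x_j ≤ y_k fails for every single k < j.
y1, y2 = frozenset({0}), frozenset({1})
xj = frozenset({0, 1})
alpha_prime = y1 | y2            # α'_3 = y_1 ∨ y_2 = {0, 1}
print(xj <= alpha_prime)         # True: the condition above triggers
print(xj <= y1 or xj <= y2)      # False: shallow/deep pruning does not
```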

Here is an algorithmic formulation of alpha beta pruning. The value of the root node is computed by f(root, 0, 1).

    function f(node, α, β)
        local value
        if node is a leaf node then
            value = heuristic position value of node
        elseif node is a min node then
            value = 1
            for child in all children of node do
                value = value ∧ f(child, α, β ∧ value)
                if value ≤ α then break end
            end
        else
            value = 0
            for child in all children of node do
                value = value ∨ f(child, α ∨ value, β)
                if value ≥ β then break end
            end
        end
        return value
    end
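Here is a runnable Python sketch of this algorithm on the subset lattice of $\{0, 1, 2\}$, with a counter of evaluated leaves to make the pruning visible. The tree encoding and the counter are assumptions of the sketch, not part of the algorithm itself.

```python
# Alpha beta pruning over the distributive lattice of subsets of
# {0, 1, 2} (meet = ∩, join = ∪, 0 = ∅, 1 = {0, 1, 2}).
TOP, BOT = frozenset({0, 1, 2}), frozenset()
visited = []  # leaves actually evaluated, to observe the pruning

def f(node, a=BOT, b=TOP):
    kind, payload = node
    if kind == "leaf":
        visited.append(payload)
        return payload
    if kind == "min":
        value = TOP
        for child in payload:
            value &= f(child, a, b & value)
            if value <= a:   # x_j ≤ α'_j: prune remaining children
                break
        return value
    value = BOT
    for child in payload:
        value |= f(child, a | value, b)
        if value >= b:       # y_j ≥ β'_j: prune remaining children
            break
    return value

leaf = lambda *e: ("leaf", frozenset(e))
tree = ("min", [
    ("max", [leaf(0), leaf(0, 1)]),
    ("max", [leaf(0, 1, 2), leaf(2)]),  # leaf(2) gets pruned
])
print(sorted(f(tree)), len(visited))  # prints [0, 1] 3
```

In the second max node the first child already reaches the bound $\beta = \{0, 1\}$ inherited from the root, so its sibling is never evaluated: only three of the four leaves are visited.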

To sum up the results of this post: I described two versions of alpha beta pruning that allow pruning nodes from the computation, at the root node, of an evaluation function with values in a distributive lattice. The first, strong, version has the disadvantage of being computationally more expensive than the second. Both versions are stronger (they potentially prune more nodes) than the method described in the paper by Ginsberg et al.