The input to the problem, for an integer n, consists of a sequence of n + 1 non-negative weights w0, w1, …, wn. The output is a rooted binary tree with n internal nodes, each having exactly two children. Such a tree has exactly n + 1 leaf nodes, which can be identified (in the same left-to-right order) with the n + 1 input weights. The goal of the problem is to find a tree, among all of the possible trees with n internal nodes, that minimizes the weighted sum of the external path lengths. These path lengths are the numbers of steps from the root to each leaf. They are multiplied by the weight of the leaf and then summed to give the quality of the overall tree.

This problem can be interpreted as a problem of constructing a binary search tree for n ordered keys, with the assumption that the tree will be used only to search for values that are not already in the tree. In this case, the n keys partition the space of search values into n + 1 intervals, and the weight of one of these intervals can be taken as the probability of searching for a value that lands in that interval. The weighted sum of external path lengths controls the expected time for searching the tree.

Alternatively, the output of the problem can be used as a Huffman code, a method for encoding n + 1 given values unambiguously by using variable-length sequences of binary values. In this interpretation, the code for a value is given by the sequence of left and right steps from a parent to the child on the path from the root to a leaf in the tree (for instance, with 0 for left and 1 for right). Unlike standard Huffman codes, the ones constructed in this way are alphabetical, meaning that the sorted order of these binary codes is the same as the input ordering of the values. If the weight of a value is its frequency in a message to be encoded, then the output of the Garsia–Wachs algorithm is the alphabetical Huffman code that compresses the message to the shortest possible length.
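The weighted sum of external path lengths can be computed by a short recursive traversal. The following is a minimal sketch in Python; the representation of an internal node as a pair (left, right) and a leaf as its bare weight is an illustrative choice, not part of the problem's specification:

```python
def weighted_path_length(tree, depth=0):
    """Weighted sum of external path lengths: each leaf's weight
    multiplied by its number of steps from the root, summed over all leaves."""
    if isinstance(tree, tuple):  # internal node: exactly two children
        left, right = tree
        return (weighted_path_length(left, depth + 1)
                + weighted_path_length(right, depth + 1))
    return tree * depth  # leaf: its weight times its depth

# The tree ((1, 2), 3) places weights 1 and 2 at depth 2 and weight 3
# at depth 1, for a cost of 1*2 + 2*2 + 3*1 = 9.
```

For the three weights 1, 2, 3 this cost of 9 is the minimum over both possible tree shapes, since the alternative (1, (2, 3)) costs 1·1 + 2·2 + 3·2 = 11.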
Algorithm
Overall, the algorithm consists of three phases:
Build a binary tree having the values as leaves but possibly in the wrong order.
Compute each leaf's distance from the root in the resulting tree.
Build another binary tree with the leaves at the same distances but in the correct order.
The first phase of the algorithm is easier to describe if the input is augmented with two sentinel values, ∞ (or any sufficiently large finite value), at the start and end of the sequence. The first phase maintains a forest of trees, initially a single-node tree for each non-sentinel input weight, which will eventually become the binary tree that it constructs. Each tree is associated with a value, the sum of the weights of its leaves. The algorithm maintains a sequence of these values, with the two sentinel values at each end. The initial sequence is just the order in which the leaf weights were given as input. It then repeatedly performs the following steps, each of which reduces the length of the sequence, until there is only one tree containing all of the leaves:
Find the first three consecutive weights x, y, and z in the sequence for which x ≤ z. There always exists such a triple, because the final sentinel value is larger than any previous two finite values.
Remove x and y from the sequence, and make a new tree node to be the parent of the nodes for x and y. Its value is x + y.
Reinsert the new node immediately after the rightmost earlier position whose value is greater than or equal to x + y. There always exists such a position, because of the left sentinel value.
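The steps above can be sketched directly, using a plain Python list for the sequence of values. This keeps each step visible at the cost of quadratic running time; the efficient version, described next, would replace the list with a balanced search tree. The nested-pair tree representation is an illustrative assumption:

```python
INF = float('inf')  # sentinel value larger than any finite weight

def garsia_wachs_phase1(weights):
    """First phase: combine the leaves into a single binary tree,
    possibly with the leaves out of order.  Each sequence entry is
    (value, tree), where the value is the sum of the tree's leaf weights."""
    seq = [(INF, None)] + [(w, w) for w in weights] + [(INF, None)]
    while len(seq) > 3:  # stop when one tree remains between the sentinels
        # find the first triple of consecutive values x, y, z with x <= z
        i = next(i for i in range(1, len(seq) - 1)
                 if seq[i - 1][0] <= seq[i + 1][0])
        (x, tx), (y, ty) = seq[i - 1], seq[i]
        del seq[i - 1 : i + 1]  # remove x and y
        # reinsert their new parent immediately after the rightmost
        # earlier position whose value is at least x + y
        j = max(k for k in range(i - 1) if seq[k][0] >= x + y)
        seq.insert(j + 1, (x + y, (tx, ty)))
    return seq[1][1]

# garsia_wachs_phase1([1, 2, 3]) yields the tree ((1, 2), 3).
```

Here the sentinels guarantee that both the triple search and the reinsertion search always succeed, exactly as argued in the two steps above.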
To implement this phase efficiently, the algorithm can maintain its current sequence of values in any self-balancing binary search tree structure. Such a structure allows the removal of x and y, and the reinsertion of their new parent, in logarithmic time. In each step, the weights up to x in the even positions of the array form a decreasing sequence, and the weights in the odd positions form another decreasing sequence. Therefore, the position to reinsert x + y may be found in logarithmic time by using the balanced tree to perform two binary searches, one for each of these two decreasing sequences. The search for the first position for which x ≤ z can be performed in linear total time by using a sequential search that begins at the z from the previous triple.

It is nontrivial to prove that, in the third phase of the algorithm, another tree with the same distances exists and that this tree provides the optimal solution to the problem. But assuming this to be true, the second and third phases of the algorithm are straightforward to implement in linear time. Therefore, the total time for the algorithm, on an input of length n, is O(n log n).
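The linear-time second and third phases can be sketched as follows; again, the nested-pair tree representation is an illustrative assumption. The third phase scans the leaves in their correct input order with a stack, merging the top two subtrees whenever they lie at the same depth:

```python
def leaf_depths(tree, depth=0):
    """Second phase: the distance of each leaf from the root,
    listed in left-to-right order."""
    if isinstance(tree, tuple):
        return leaf_depths(tree[0], depth + 1) + leaf_depths(tree[1], depth + 1)
    return [depth]

def tree_from_depths(leaves, depths):
    """Third phase: build the binary tree whose i-th leaf, taken in the
    given (correct) order, lies at depth depths[i]."""
    stack = []  # entries are (subtree, depth)
    for leaf, d in zip(leaves, depths):
        stack.append((leaf, d))
        # merge the top two subtrees whenever they sit at the same depth
        while len(stack) >= 2 and stack[-1][1] == stack[-2][1]:
            (right, d2), (left, _) = stack.pop(), stack.pop()
            stack.append(((left, right), d2 - 1))
    (root, _), = stack  # a valid depth sequence leaves exactly one tree
    return root
```

For example, if the first phase produced the out-of-order tree ((2, 1), 3) for the input sequence 1, 2, 3, then leaf_depths gives [2, 2, 1], and tree_from_depths([1, 2, 3], [2, 2, 1]) rebuilds ((1, 2), 3), which has the same multiset of leaf depths and hence the same cost.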
History
The Garsia–Wachs algorithm is named after Adriano Garsia and Michelle L. Wachs, who published it in 1977. Their algorithm simplified an earlier method of T. C. Hu and Alan Tucker, and it ends up making the same comparisons in the same order as the Hu–Tucker algorithm. The original proof of correctness of the Garsia–Wachs algorithm was complicated, and was later simplified by subsequent authors.