The Shannon-Fano Algorithm

A Shannon-Fano tree is built according to a specification designed to define an effective code table. The actual algorithm is simple:

  1. For a given list of symbols, develop a corresponding list of probabilities or frequency counts so that each symbol's relative frequency of occurrence is known.
  2. Sort the list of symbols according to frequency, with the most frequently occurring symbols at the top and the least common at the bottom.
  3. Divide the list into two parts, with the total frequency count of the upper half as close to the total of the bottom half as possible.
  4. Assign the binary digit 0 to the upper half of the list and the digit 1 to the lower half. This means that the codes for the symbols in the first half will all start with 0, and the codes in the second half will all start with 1.
  5. Recursively apply steps 3 and 4 to each of the two halves, subdividing groups and adding bits to the codes until each symbol has become a corresponding code leaf on the tree (see the sketch after this list).
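
To make the procedure concrete, here is a minimal Python sketch of the five steps above. The function name shannon_fano, the dict-based frequency table, and the tie-breaking rule in the split search are my own illustrative choices, not part of the original presentation.

    def shannon_fano(frequencies):
        """Build a Shannon-Fano code table from a dict mapping symbol -> count."""
        # Steps 1 and 2: sort the symbols by descending frequency.
        symbols = sorted(frequencies, key=frequencies.get, reverse=True)
        codes = {s: "" for s in symbols}

        def split(group):
            # A single symbol is a finished leaf of the code tree.
            if len(group) < 2:
                return
            total = sum(frequencies[s] for s in group)
            # Step 3: find the dividing line that makes the two halves'
            # frequency totals as close to equal as possible.
            running, best_i, best_diff = 0, 1, float("inf")
            for i in range(1, len(group)):
                running += frequencies[group[i - 1]]
                diff = abs(running - (total - running))
                if diff < best_diff:
                    best_i, best_diff = i, diff
            upper, lower = group[:best_i], group[best_i:]
            # Step 4: the upper half gets a 0 bit, the lower half a 1 bit.
            for s in upper:
                codes[s] += "0"
            for s in lower:
                codes[s] += "1"
            # Step 5: recurse on each half until every group is a single symbol.
            split(upper)
            split(lower)

        split(symbols)
        return codes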

Table of symbol frequencies

Symbol   Count
A        15
B         7
C         6
D         6
E         5

Putting the dividing line between symbols B and C assigns a count of 22 to the upper group and 17 to the lower, the closest possible split to an even division. This means that A and B will each have a code that starts with a 0 bit, and C, D, and E are all going to start with a 1, as shown:

Symbol   Count   First bit
A        15      0
B         7      0
C         6      1
D         6      1
E         5      1

Subsequently, the upper half of the table gets a new division between A and B, which puts A on a leaf with code 00 and B on a leaf with code 01. After four division procedures, a table of codes results. In the final table, the three symbols with the highest frequencies have all been assigned 2-bit codes, and the two symbols with lower counts have 3-bit codes, as shown next:

Symbol   Count   Code
A        15      00
B         7      01
C         6      10
D         6      110
E         5      111
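
Running the earlier shannon_fano sketch on the example frequencies reproduces this table (the symbol names and counts are the ones used throughout this example):

    freqs = {"A": 15, "B": 7, "C": 6, "D": 6, "E": 5}
    for symbol, code in shannon_fano(freqs).items():
        print(symbol, code)
    # A 00
    # B 01
    # C 10
    # D 110
    # E 111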

That symbols with a higher probability of occurrence have fewer bits in their codes indicates we are on the right track. The information content of a given symbol is the negative of the base-two logarithm of the symbol's probability. For our theoretical message, the information content of each symbol, along with the total number of bits for that symbol in the message, is shown in the following table.

Symbol   Count   Info Content   Info Bits
A        15      1.38           20.68
B         7      2.48           17.35
C         6      2.70           16.20
D         6      2.70           16.20
E         5      2.96           14.82

The information content for this message adds up to about 85.25 bits. If we coded the characters as 8-bit ASCII, we would use 39 × 8, or 312 bits. Obviously there is room for improvement.
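
These figures are easy to reproduce. The short Python check below (a sanity check of my own, not part of the original presentation) computes each symbol's information content and the two totals quoted above:

    from math import log2

    freqs = {"A": 15, "B": 7, "C": 6, "D": 6, "E": 5}
    total = sum(freqs.values())                  # 39 symbols in the message

    info_bits = 0.0
    for symbol, count in freqs.items():
        p = count / total                        # relative frequency of the symbol
        content = -log2(p)                       # information content in bits
        info_bits += count * content
        print(f"{symbol}  {content:.2f}  {count * content:.2f}")

    print(f"total information: {info_bits:.2f} bits")   # about 85.25
    print(f"8-bit ASCII encoding: {total * 8} bits")    # 312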

When we encode the same data using Shannon-Fano codes, we come up with some pretty good numbers, as shown below:

Symbol   Count   Info Content   Info Bits   SF Size   SF Bits
A        15      1.38           20.68       2         30
B         7      2.48           17.35       2         14
C         6      2.70           16.20       2         12
D         6      2.70           16.20       3         18
E         5      2.96           14.82       3         15

With the Shannon-Fano coding system, it takes only 89 bits to encode 85.25 bits of information. Clearly we have come a long way in our quest for efficient coding methods. And while Shannon-Fano coding was a great leap forward, it had the unfortunate luck to be quickly superseded by an even more efficient coding system: Huffman coding.
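
As one last check of the 89-bit figure, the encoded length can be tallied directly from the codes produced by the shannon_fano sketch above:

    freqs = {"A": 15, "B": 7, "C": 6, "D": 6, "E": 5}
    codes = shannon_fano(freqs)
    encoded_bits = sum(count * len(codes[s]) for s, count in freqs.items())
    print(encoded_bits)   # 89 bits, against 85.25 bits of information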
