By performing the sorting procedure in a slightly different manner, we could have found a different Huffman code. In the first re-sort, we could place the combined letter a4′ higher in the list, as shown in Table 3.6.

Now combine a1 and a3 into a1′, which has a probability of 0.4. Sorting the alphabet a2, a4′, a1′ and putting a1′ as far up the list as possible, we get Table 3.7. Finally, by combining a2 and a4′ and re-sorting, we get Table 3.8. If we go through the unbundling procedure, we get the codewords in Table 3.9. The procedure is summarized in Figure 3.3. The average length of the code is

l = 0.4 × 2 + 0.2 × 2 + 0.2 × 2 + 0.1 × 3 + 0.1 × 3 = 2.2 bits/symbol
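As a quick check, the average length can be recomputed from the probabilities and codeword lengths in Table 3.9 (a minimal sketch; the dictionary names are illustrative):

```python
# Probabilities and codeword lengths taken from Table 3.9.
probs   = {"a1": 0.2, "a2": 0.4, "a3": 0.2, "a4": 0.1, "a5": 0.1}
lengths = {"a1": 2, "a2": 2, "a3": 2, "a4": 3, "a5": 3}

# Average length: sum of P(ai) * length of codeword for ai.
avg_len = sum(probs[s] * lengths[s] for s in probs)
print(round(avg_len, 2))  # 2.2 bits/symbol
```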

**T A B L E 3 . 6** **Reduced four-letter alphabet.**

| Letter | Probability | Codeword |
|--------|-------------|----------|
| a2 | 0.4 | c(a2) |
| a4′ | 0.2 | α1 |
| a1 | 0.2 | c(a1) |
| a3 | 0.2 | c(a3) |

**T A B L E 3 . 7** **Reduced three-letter alphabet.**

| Letter | Probability | Codeword |
|--------|-------------|----------|
| a1′ | 0.4 | α2 |
| a2 | 0.4 | c(a2) |
| a4′ | 0.2 | α1 |

**T A B L E 3 . 8** **Reduced two-letter alphabet.**

| Letter | Probability | Codeword |
|--------|-------------|----------|
| a2′ | 0.6 | α3 |
| a1′ | 0.4 | α2 |

**T A B L E 3 . 9** **Minimum variance Huffman code.**

| Letter | Probability | Codeword |
|--------|-------------|----------|
| a1 | 0.2 | 10 |
| a2 | 0.4 | 00 |
| a3 | 0.2 | 11 |
| a4 | 0.1 | 010 |
| a5 | 0.1 | 011 |
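The re-sorting rule above can be sketched in code. The sketch below (an illustration, not the text's own notation) mimics "place the combined letter as high as possible" by breaking probability ties in favor of the entry created earliest, so newly combined letters are merged last among equals. It returns the codeword lengths, which should match Table 3.9.

```python
import heapq

def min_variance_huffman_lengths(probs):
    """Return codeword lengths for a minimum variance Huffman code.

    Ties in probability are broken by creation order: original letters
    (and earlier-created combined letters) are merged before newer
    combined letters, which mimics placing each combined letter as
    high as possible in the re-sorted list.
    """
    heap = []
    order = 0
    # Each heap entry: (probability, creation order, {symbol: depth}).
    for sym, p in probs.items():
        heapq.heappush(heap, (p, order, {sym: 0}))
        order += 1
    while len(heap) > 1:
        p1, _, d1 = heapq.heappop(heap)
        p2, _, d2 = heapq.heappop(heap)
        # Merging the two lowest entries adds one bit to every
        # codeword in both subtrees.
        merged = {s: d + 1 for s, d in d1.items()}
        merged.update({s: d + 1 for s, d in d2.items()})
        heapq.heappush(heap, (p1 + p2, order, merged))
        order += 1
    return heap[0][2]

probs = {"a1": 0.2, "a2": 0.4, "a3": 0.2, "a4": 0.1, "a5": 0.1}
lengths = min_variance_huffman_lengths(probs)
# Lengths 2, 2, 2, 3, 3 agree with the codewords in Table 3.9.
```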