Convolutional Codes: Viterbi decoding

INTRODUCTION

Convolutional codes are a type of error-correcting code designed to correct the errors introduced by the channel. In a block code, the source data stream is divided into blocks of length k and each block is encoded as a code word of n bits; the code efficiency or rate is therefore R = k/n.

Convolutional codes use shift registers as encoders: the source data is divided into frames of k0 bits, and the M+1 most recent frames are encoded into a frame of n0 bits, where M is the length of the shift register:
[Figure: convolutional encoder]
When a new data frame is read in, the previous ones are shifted to the right and the new code word is computed.
For most convolutional codes, the length of the source frame is k0 = 1 (a binary convolutional code).
A convolutional encoder is therefore a finite-state machine: if there are M memory elements, the state diagram will have 2^M states, named S0, S1, ..., S(2^M - 1). In the simplest cases, the state diagram is obtained from the analysis of the encoder's circuit, and it is very similar to the diagrams used for Markov processes and channels with memory.
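To make the finite-state-machine view concrete, the sketch below (our own illustrative Java, not the Applet's source) enumerates the 2^M states of a binary encoder from its generator tap masks and prints, for every state and input bit, the output and the next state. The bit ordering of the states is our own convention, and the masks in main() are those of the rate-1/2 example encoder analysed in the next section.

// Illustrative sketch: enumerate the state diagram of a binary convolutional
// encoder with M memory elements (2^M states) from its generator tap masks.
public class StateDiagram {

    // Modulo-2 sum of the tapped bits; window = (current input bit, M stored bits).
    static String output(int[] generators, int window) {
        StringBuilder out = new StringBuilder();
        for (int g : generators) {
            out.append(Integer.bitCount(window & g) & 1);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        int M = 2;                             // two memory elements -> 2^M = 4 states
        int[] generators = {0b111, 0b101};     // taps of the example encoder used later in the post
        for (int state = 0; state < (1 << M); state++) {
            for (int input = 0; input <= 1; input++) {
                int window = (input << M) | state;            // input bit followed by the stored bits
                int next = (input << (M - 1)) | (state >> 1); // the register shifts to the right
                System.out.printf("S%d --input %d / output %s--> S%d%n",
                        state, input, output(generators, window), next);
            }
        }
    }
}

Running it lists the 2^(M+1) = 8 transitions of the 4-state diagram, which is exactly the information that the state and Trellis diagrams display graphically.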
 
Trellis diagrams can also be used to build the corresponding convolutional decoder. The error-correcting properties of a convolutional code are determined by the adversary paths in the Trellis diagram: two paths are adversaries if both start in a given state Si and both finish in another state Sf without travelling through the same intermediate states.
The error-correcting capability of a convolutional code is therefore characterised by the Hamming distance between the adversary paths in the Trellis diagram.
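Since this Hamming distance is also the branch metric that the Viterbi decoder accumulates in the example below, here is a minimal helper that computes it (illustrative Java, names are our own):

// Illustrative helper: Hamming distance between two bit sequences of equal length.
public class Hamming {

    static int distance(String a, String b) {
        int d = 0;
        for (int i = 0; i < a.length(); i++) {
            if (a.charAt(i) != b.charAt(i)) d++;   // count the differing positions
        }
        return d;
    }

    public static void main(String[] args) {
        System.out.println(distance("1101", "1001"));  // prints 1: the sequences differ in one bit
    }
}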
In the next section, we are going to study a practical example of the Viterbi algorithm, the maximum-likelihood decoding algorithm for convolutional codes. We will illustrate its performance using a Java Applet that runs it: we will simulate the encoding of the message at the transmitter using generator polynomials, the noisy channel that corrupts the code words, and the decoding that shows the high efficacy of this error-correcting algorithm.

VITERBI ALGORITHM EXAMPLE

In this example, we will use the following binary convolutional encoder with efficiency (rate) 1/2, 2 registers and modulo-2 arithmetic adders:
[Figure: binary encoder]
The input message will be 1101. The generator polynomials are 1 + x + x^2 and 1 + x^2, whose modulo-2 (binary) representations are 111 and 101, respectively. These polynomials follow from the adder connections of the encoder shown above. Therefore, the encoded message is:
11, 01, 01, 00, 10, 11
This is the message that will be transmitted through the channel. We have reproduced this encoding process in the Applet:
[Figure: input message encoded in the Applet]
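As a cross-check of the Applet's result, the following sketch (our own Java reimplementation, not the Applet's source) encodes the input 1101, followed by the two flushing zeros that return the encoder to S0, using the generators 111 and 101; it prints the same sequence, 11 01 01 00 10 11.

// Illustrative rate-1/2 convolutional encoder with generators 111 (1+x+x^2) and 101 (1+x^2).
public class Encoder {
    public static void main(String[] args) {
        int[] message = {1, 1, 0, 1, 0, 0};   // input 1101 plus two flushing zeros
        int m1 = 0, m2 = 0;                   // the two memory elements, initially empty
        StringBuilder coded = new StringBuilder();
        for (int u : message) {
            int out1 = u ^ m1 ^ m2;           // adder for generator 1 + x + x^2 (111)
            int out2 = u ^ m2;                // adder for generator 1 + x^2     (101)
            coded.append(out1).append(out2).append(' ');
            m2 = m1;                          // shift the register to the right
            m1 = u;
        }
        System.out.println(coded.toString().trim());  // 11 01 01 00 10 11
    }
}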
The channel will introduce 4 errors, so the received sequence will differ from the transmitted one in 4 bits. We have obtained the Trellis diagram in the Applet, which joins the processors together. In this case there are 4 states, so 4 processors are used. The states are listed on the left-hand side, with S0 as the initial state, and the corresponding outputs are shown in the grey boxes:
[Figure: Trellis diagram]
The following two images show the state diagram and the corresponding Trellis diagram, respectively:
[Figure: state diagram]
[Figure: corresponding Trellis diagram]
The path metrics, or cumulative Hamming distances, are shown in yellow at each processor node; we keep the paths with the smallest metric. The selected paths are highlighted in pink while the rejected ones are in blue:
[Figure: Trellis paths]
In the previous example, starting from S0 (the initial state), the metric or Hamming distance between the received symbol (01) and each symbol that could have been transmitted (00 or 11) is 1.
We repeat the same process at the remaining time instants: we compare the received symbol with each symbol that could have been transmitted (i.e. the output of the processor node), calculate the Hamming distance and, finally, obtain the cumulative metric:
[Figure: Hamming distances]
Branches where adversary paths merge are resolved by selecting the path with the smallest cumulative metric. If two adversary paths have the same metric, one of them is chosen at random.
After processing all the received code words, the algorithm will determine the most likely sequence that was transmitted.
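For readers who prefer code to diagrams, below is a compact Viterbi decoder for this rate-1/2 code (an illustrative sketch with our own naming conventions, not the Applet's source). At every time instant it performs the add-compare-select step described above, keeps one survivor path per state and, at the end, traces back from S0, since the two flushing zeros return the encoder to the all-zero state. Ties are resolved here by simply keeping the first survivor found, rather than choosing at random as described above.

import java.util.Arrays;

// Illustrative Viterbi decoder for the rate-1/2 encoder with generators 111 and 101.
public class ViterbiDecoder {

    static final int M = 2;                  // memory elements -> 2^M = 4 states
    static final int STATES = 1 << M;

    // Expected output pair when the encoder is in 'state' = (u(n-1), u(n-2)) and reads 'input'.
    static int[] branchOutput(int state, int input) {
        int m1 = (state >> 1) & 1;           // most recent stored bit, u(n-1)
        int m2 = state & 1;                  // oldest stored bit, u(n-2)
        return new int[]{ input ^ m1 ^ m2,   // generator 111
                          input ^ m2 };      // generator 101
    }

    static int nextState(int state, int input) {
        return (input << 1) | (state >> 1);  // the register shifts to the right
    }

    // received[t] holds the two received bits at time instant t.
    static int[] decode(int[][] received) {
        int T = received.length;
        int[][] metric = new int[T + 1][STATES];      // cumulative Hamming distances
        int[][] prevState = new int[T + 1][STATES];   // survivor bookkeeping for the traceback
        int[][] prevInput = new int[T + 1][STATES];
        for (int[] row : metric) Arrays.fill(row, Integer.MAX_VALUE / 2);
        metric[0][0] = 0;                             // the encoder starts in S0

        for (int t = 0; t < T; t++) {
            for (int s = 0; s < STATES; s++) {
                if (metric[t][s] >= Integer.MAX_VALUE / 2) continue;   // unreachable state
                for (int u = 0; u <= 1; u++) {
                    int[] expected = branchOutput(s, u);
                    int branch = (expected[0] ^ received[t][0])
                               + (expected[1] ^ received[t][1]);       // branch Hamming distance
                    int ns = nextState(s, u);
                    int candidate = metric[t][s] + branch;             // add
                    if (candidate < metric[t + 1][ns]) {               // compare and select
                        metric[t + 1][ns] = candidate;
                        prevState[t + 1][ns] = s;
                        prevInput[t + 1][ns] = u;
                    }
                }
            }
        }

        // Trace back from S0: the flushing zeros return the encoder to the all-zero state.
        int[] decoded = new int[T];
        int s = 0;
        for (int t = T; t > 0; t--) {
            decoded[t - 1] = prevInput[t][s];
            s = prevState[t][s];
        }
        return decoded;
    }

    public static void main(String[] args) {
        // Sanity check with the noise-free encoded sequence from the post: 11 01 01 00 10 11.
        int[][] received = { {1,1}, {0,1}, {0,1}, {0,0}, {1,0}, {1,1} };
        System.out.println(Arrays.toString(decode(received)));        // [1, 1, 0, 1, 0, 0]
    }
}

The same decode() call can be run on any received sequence of symbol pairs, such as the one with 4 errors generated by the Applet, and it returns the most likely transmitted input bits, including the two flushing zeros.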
The following image shows, in red, the most likely transmitted sequence for the first 4 bits which, in this case, matches the original transmitted sequence:
[Figure: decoded Trellis]

At this stage, the process is repeated by introducing more bits into the encoder until we decode the full message.

Did you find this post useful? Do you have any questions or suggestions? Leave a comment below or contact us by email: contact@behindthesciences.com
