Sunday, 25 March 2018


We know that it is virtually impossible to send any signal, analog or digital, over a distance without some distortion, even under the most perfect conditions.
This is basically due to various impairments that can occur during transmission as a result of an imperfect medium and/or environment. These impairments can be classified into three main categories, as given below:
·         Delay distortion
·         Attenuation
·         Noise
Delay Distortion
Delay distortion is caused because the signals of varying frequencies travel at different speeds along the medium. We know that any complex signal can be decomposed into different sinusoidal signals of different frequencies resulting in a frequency bandwidth for every signal. One property of signal propagation is that the speed of travel of the frequency is the highest at the center of this bandwidth, and lower at both ends. Therefore, at the receiving end, signals with different frequencies in a given bandwidth will arrive at different times. If the signals received are measured at a specific time, they will not measure up to the original signal resulting in its misinterpretation.
Attenuation
Attenuation is another form of distortion. In this case, as a signal travels through any medium, its strength decreases, just as our voice becomes weak over a distance and loses its content beyond a certain point. Attenuation is very small over short distances, so the original signal can be recognized without too much distortion. Attenuation, however, increases with distance, because some of the signal energy is absorbed by the medium. Attenuation is also higher at higher frequencies.
Noise
Noise is yet another component that poses a problem in receiving the signal accurately. A signal travels as electromagnetic energy through the medium, and unwanted electromagnetic energy that gets inserted somewhere during transmission is called noise. Apart from distortion and attenuation, noise is one of the major limiting factors in the performance of any communication system.
If the signal is carrying binary data, there can be two types of errors: single-bit errors and burst errors. In a single-bit error, exactly one bit of the data unit changes: a 0 becomes a 1, or a 1 becomes a 0. Single-bit errors are more likely in parallel transmission, because it is possible that just one of the eight wires carrying the bits becomes noisy, resulting in the corruption of a single bit of each byte.
In contrast, a burst error changes at least two bits during data transmission. Note that burst errors can change any two or more bits, and these bits need not be adjacent. Burst errors are more likely in serial transmission, because the duration of a noise spike is typically longer than a single bit time, which causes multiple bits to be corrupted.
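As an illustration, both error types above can be modelled by XOR-ing a byte with an error mask. This is a minimal Python sketch; the function name flip_bits and the sample values are my own, not from the text:

```python
# Modelling single-bit and burst errors on a byte using XOR masks.

def flip_bits(byte, mask):
    """Flip the bits of `byte` wherever `mask` has a 1 bit."""
    return byte ^ mask

original = 0b01010101

# Single-bit error: exactly one bit (here, bit 3) is inverted.
single = flip_bits(original, 0b00001000)   # -> 0b01011101

# Burst error: two or more bits change; they need not be adjacent.
burst = flip_bits(original, 0b01000010)    # -> 0b00010111
```

XOR is the natural model here because transmission noise inverts bits rather than setting them to a fixed value.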
There are a number of techniques used for detecting and correcting transmission errors. The most common techniques are described below.
Vertical Redundancy Check (VRC) or Parity Check
The Vertical Redundancy Check (VRC), also known as parity check, is quite simple. It is also the least expensive technique. In this method, the sender appends a single additional bit, called the parity bit, to the message before transmitting it. There are two schemes: odd parity and even parity. In the odd parity scheme, the parity bit is added in such a way that the total number of 1s, inclusive of the parity bit, is odd. In the even parity scheme, the parity bit is added such that the total number of 1s, inclusive of the parity bit, is even.
There is one problem with this scheme: it can only catch a single-bit error. If two bits flip, the scheme fails. For instance, suppose the stream 1100011 (which contains four 1s) is sent with an even parity bit of 0. If the first two bits change in transit, the receiver gets 0000011, which contains two 1s and again yields a parity bit of 0, so the error goes undetected. Clearly, parity checking can detect single-bit errors, but if multiple bits of a message are changed due to a burst error, it may not work. Better schemes are required to trap burst errors.
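The behaviour above (single-bit errors caught, double-bit errors missed) can be sketched in a few lines of Python; the helper names even_parity_bit and check_even_parity are illustrative, not from the text:

```python
# A minimal sketch of even-parity (VRC) generation and checking.

def even_parity_bit(bits):
    """Return the parity bit that makes the total count of 1s even."""
    return sum(bits) % 2

def check_even_parity(bits_with_parity):
    """True if the received stream (data + parity) has an even number of 1s."""
    return sum(bits_with_parity) % 2 == 0

data = [1, 1, 0, 0, 0, 1, 1]
sent = data + [even_parity_bit(data)]

# A single-bit error is detected:
one_flip = sent.copy(); one_flip[0] ^= 1
assert check_even_parity(one_flip) is False

# But a two-bit error slips through:
two_flips = sent.copy(); two_flips[0] ^= 1; two_flips[1] ^= 1
assert check_even_parity(two_flips) is True
```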
Longitudinal Redundancy Check (LRC)
In the Longitudinal Redundancy Check (LRC), a block of bits is organized as a table (in rows). For instance, to send 32 bits, we arrange them into four rows of eight bits each. The parity bit for each column is then calculated, creating a new row of eight bits. These become the parity bits for the whole block and are appended to the data before transmission.
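As a sketch, the column-wise parity row can be computed like this in Python; the helper name lrc and the sample rows are my own assumptions:

```python
# A minimal LRC sketch: compute an even-parity bit for each column
# of a block of equal-length rows.

def lrc(rows):
    """Given equal-length rows of bits, return the column-parity row."""
    return [sum(col) % 2 for col in zip(*rows)]

rows = [
    [1, 0, 1, 1, 0, 0, 1, 0],
    [0, 1, 1, 1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 1, 1, 1, 0, 0],
]
print(lrc(rows))  # -> [0, 1, 0, 1, 0, 0, 0, 0]
```

Because each parity bit covers one column, LRC can catch some burst errors that a single VRC bit would miss, though it still fails if an even number of bits flip in the same column.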
Cyclic Redundancy Check (CRC)
A mathematical algorithm is used on the data block to be sent to arrive at the Cyclic Redundancy Check (CRC) – a small block of bits which are appended to the data block and sent by the sender. At the destination, the receiver separates the data block, recomputes the CRC using the same algorithm and matches the received CRC with the computed CRC. A mismatch indicates an error.
The main features of CRC are as follows:
·         CRC is a far more robust error-detection method than the others described above. The algorithm used to compute the CRC is chosen such that, for a data block of a given length in bits, only a very small, finite number of error patterns leave the CRC unchanged.
·         CRC is normally implemented in hardware (in the modem), rather than in software. This makes the operation very fast, though a little more expensive. Depending on the CRC method used, the corresponding type of modem has to be used. For computing the CRC, two simple hardware components are used: an XOR gate and a shift register.
·         The data to be transmitted is divided into a number of blocks of several bits each. Each block is then treated as one long binary number and divided, using modulo-2 arithmetic, by a predetermined binary divisor called the generator polynomial. The remainder of this division is the CRC. This is the usual method.
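As a hedged sketch of this division, here is a bit-by-bit modulo-2 long division in Python. The generator 1011 (x³ + x + 1) is a common textbook choice, not one specified in this article:

```python
# CRC generation by modulo-2 (XOR) long division.

def crc_remainder(data_bits, generator):
    """Append zeros, divide modulo-2 by the generator, return the remainder."""
    padded = data_bits + [0] * (len(generator) - 1)
    for i in range(len(data_bits)):
        if padded[i] == 1:                  # divide only where the leading bit is 1
            for j, g in enumerate(generator):
                padded[i + j] ^= g          # modulo-2 subtraction is XOR
    return padded[-(len(generator) - 1):]

data = [1, 0, 1, 1, 0, 1]
gen = [1, 0, 1, 1]
crc = crc_remainder(data, gen)             # -> [0, 1, 1]

# The receiver recomputes the division over data + CRC;
# a zero remainder indicates no detected error.
assert crc_remainder(data + crc, gen) == [0, 0, 0]
```

In practice, real implementations use the hardware shift registers mentioned above, or library routines such as Python's zlib.crc32, rather than this bit-by-bit loop.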
