Congestion

The concept of congestion in the network layer is a simple one. The performance of any system degrades if it is asked to do more work than it can cope with.

In this context, if there are too many packets present in a given part of the subnet, we say that the subnet is congested.

This situation is shown graphically in the following diagram.

It is clear from the diagram that performance degrades very sharply when congestion occurs.
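
To make the shape of that curve concrete, the short Python sketch below (purely illustrative; the buffer size, line capacity and arrival model are assumptions, not figures taken from the diagram) simulates a single store-and-forward node with a finite buffer. Delivered traffic grows with the offered load only until the output line capacity is reached; beyond that point it flattens while packet loss climbs, and in a real subnet the retransmission of those lost packets consumes yet more capacity, which is what produces the sharp degradation shown in the diagram.

    # Illustrative only: one node with a finite buffer and a fixed output
    # line capacity (all values are assumed for the sketch).
    BUFFER_SLOTS = 20        # packets the node can hold
    LINE_CAPACITY = 5        # packets forwarded per tick
    TICKS = 1000

    def run(offered_load):
        """Offer 'offered_load' packets per tick; return (delivered, dropped) per tick."""
        queue = delivered = dropped = 0
        for _ in range(TICKS):
            accepted = min(offered_load, BUFFER_SLOTS - queue)  # buffer overflow is dropped
            dropped += offered_load - accepted
            queue += accepted
            sent = min(queue, LINE_CAPACITY)                    # drain at line capacity
            queue -= sent
            delivered += sent
        return delivered / TICKS, dropped / TICKS

    for load in range(1, 11):
        delivered, dropped = run(load)
        print(f"offered {load:2d}/tick -> delivered {delivered:4.1f}, dropped {dropped:4.1f}")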

The obvious question is how such congestion comes about. The answer is essentially concerned with the number of IMP buffers in use within the system.

Regardless of the line capacity (both input and output), if the buffers are handled too slowly to deal with the traffic, then queues will build up (i.e. congestion). At the other end of the scale, however, is the situation where the IMP buffers can operate at infinite speed. Even under these circumstances congestion can still occur, if the rate of incoming packets exceeds the output line capacity.
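
Both cases can be seen in a few lines of Python. The sketch below is again illustrative only (the rates and the per-tick model are assumptions, not figures from these notes): the IMP can forward at most the minimum of its processing rate and its output line capacity per tick, so the queue grows without bound whenever arrivals exceed that bottleneck, whether the bottleneck is slow buffer handling or the output line itself.

    def queue_growth(arrival_rate, processing_rate, line_capacity, ticks=8):
        """Queue length after each tick; the IMP forwards at most
        min(processing_rate, line_capacity) packets per tick."""
        forward = min(processing_rate, line_capacity)
        queue, history = 0, []
        for _ in range(ticks):
            queue += arrival_rate            # packets arriving on the input lines
            queue -= min(queue, forward)     # packets actually forwarded
            history.append(queue)
        return history

    # (a) fast lines, but the buffers are handled too slowly: the queue still grows
    print(queue_growth(arrival_rate=8, processing_rate=4, line_capacity=100))

    # (b) "infinitely" fast buffers, but arrivals exceed the output line capacity
    print(queue_growth(arrival_rate=8, processing_rate=10**9, line_capacity=4))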

There is a knock-on effect caused by congestion that must be considered. It is rather like what happens with a row of falling dominoes. Since the congested IMP is not acknowledging receipt of packets, the IMP that is sending them cannot free up its own much-needed buffers (it must keep trying to send the packets). As a result, congestion can back up across the whole network.
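
The domino effect can also be sketched in code. The toy model below (the three-IMP chain, the buffer size and the one-packet-per-tick forwarding rule are all assumptions made for illustration) forwards a packet to the next hop only when that hop has a free buffer, standing in for the acknowledgement; because the last IMP is congested and never drains, the backlog spreads upstream hop by hop.

    BUFFER_SLOTS = 4
    chain = {"A": 0, "B": 0, "C": 0}    # packets buffered at each IMP; C is congested

    def step(new_packets):
        """One time step: forward wherever the next hop would acknowledge (has room),
        working from the downstream end so a packet moves at most one hop per tick,
        then let the source offer new traffic to A."""
        names = list(chain)
        for upstream, downstream in reversed(list(zip(names, names[1:]))):
            if chain[upstream] > 0 and chain[downstream] < BUFFER_SLOTS:
                chain[upstream] -= 1
                chain[downstream] += 1
        chain["A"] = min(BUFFER_SLOTS, chain["A"] + new_packets)

    for tick in range(1, 13):
        step(new_packets=1)
        print(f"tick {tick:2d}: {chain}")

Running it shows C's buffers filling first, then B's, then A's, which is exactly the backing-up described above.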

Obviously there must be ways of reducing the effects of congestion and minimising the possibility of it occurring. This area is known as congestion control.

There are many different ways to control congestion. Some of these are explained here: