
Network performance: Links between latency, throughput and packet loss

When troubleshooting network degradation or outages, we need ways to measure network performance, determine when the network is slow, and identify the root cause (saturation, bandwidth outage, misconfiguration, network device defect, etc.).

Whatever approach you take to the problem (capturing traffic with network analyzers like Wireshark, polling SNMP with tools such as PRTG or Cacti, or generating traffic and actively testing with tools such as SmokePing, or a simple ping or traceroute to track network response times), you need indicators. These are usually called metrics, and their purpose is to put tangible figures on the performance status of the network.

This article explains what three major network performance indicators (latency, throughput and packet loss) actually reflect and how they interact with each other in TCP and UDP traffic streams.

  • Latency is the time required to carry a packet across a network.
    • Latency may be measured in many different ways: round trip, one way, etc.
    • Latency may be impacted by any element in the chain used to carry the data: workstation, WAN links, routers, local area network, server… and, for large networks, it is ultimately bounded by the speed of light.
  • Throughput is the quantity of data sent/received per unit of time.
  • Packet loss reflects the percentage of packets lost out of the packets sent by a host.

This can help you understand the mechanisms of network slowdowns.  
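As a toy illustration (the probe values are hypothetical, not from the article), all three metrics can be derived from a handful of ping-style samples:

```python
# Hypothetical ping-style samples: (sequence number, RTT in ms).
# None means the reply never came back, i.e. a lost packet.
samples = [(1, 20.1), (2, 19.8), (3, None), (4, 21.4), (5, 20.3)]

received = [rtt for _, rtt in samples if rtt is not None]

# Latency, measured here as the round-trip time averaged over received replies.
avg_latency_ms = sum(received) / len(received)

# Packet loss: the share of probes that never got an answer.
loss_pct = 100.0 * (len(samples) - len(received)) / len(samples)

print(f"average RTT: {avg_latency_ms:.1f} ms, packet loss: {loss_pct:.0f}%")
```

Throughput, the third metric, would come from counting bytes transferred per unit of time rather than from probe RTTs.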


UDP is a protocol used to carry data over IP networks. One of its principles is that all packets sent are assumed to be received by the other party (or that such controls are performed at a different layer, for example by the application itself).

In theory, or for some specific protocols (where no control is performed at a different layer, e.g. one-way transmissions), the rate at which the sender can send packets is not impacted by the time required to deliver them to the other party (i.e. the latency). Whatever that time is, the sender will send a given number of packets per second, which depends on other factors (application, operating system, resources, etc.).
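A minimal sketch of this property, assuming a pure fire-and-forget sender:

```python
def udp_packets_sent(duration_s, send_rate_pps, latency_s):
    # For a fire-and-forget UDP sender, the one-way latency does not appear
    # anywhere in the arithmetic: the application clocks out packets at its
    # own rate regardless of when (or whether) they arrive.
    return int(duration_s * send_rate_pps)

# The same number of packets leaves the sender whether latency is 1 ms or 500 ms.
print(udp_packets_sent(10, 100, 0.001))  # 1000
print(udp_packets_sent(10, 100, 0.500))  # 1000
```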



TCP is a more complex protocol, as it integrates a mechanism that checks that all packets are correctly delivered. This mechanism is called acknowledgment: the receiver sends a specific packet or flag back to the sender to confirm the proper reception of a packet.

TCP Congestion Window

For efficiency, not every packet is acknowledged one by one: the sender does not wait for each acknowledgment before sending new packets. The number of packets that may be sent before receiving the corresponding acknowledgment is governed by a value called the TCP congestion window.

How the TCP Congestion Window impacts the throughput

If we assume that no packet gets lost, the sender sends a first quota of packets (corresponding to the TCP congestion window) and, when it receives the acknowledgment, increases the congestion window; progressively, the number of packets that can be sent in a given period of time (the throughput) increases. The delay before acknowledgment packets are received (i.e. the latency) therefore has an impact on how fast the TCP congestion window increases, and hence on the throughput.

When latency is high, it means that the sender spends more time idle (not sending any new packets), which reduces how fast throughput grows.
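This ramp-up can be sketched with a toy slow-start model. The doubling-per-round-trip growth and the initial window of 10 segments are simplifying assumptions, not figures from the article; the point is only that the time to move a fixed amount of data scales with the RTT:

```python
def time_to_send(total_packets, rtt_s, initial_cwnd=10):
    # Toy slow-start model: each round trip, a full congestion window of
    # packets is sent and acknowledged, then the window doubles.
    cwnd, sent, elapsed = initial_cwnd, 0, 0.0
    while sent < total_packets:
        sent += cwnd
        elapsed += rtt_s  # the sender waits one RTT for the acknowledgments
        cwnd *= 2
    return elapsed

# The same 10,000-packet transfer takes ten times longer at 100 ms RTT than
# at 10 ms RTT, purely from waiting on acknowledgments.
for rtt_ms in (10, 50, 100):
    print(f"RTT {rtt_ms} ms -> {time_to_send(10_000, rtt_ms / 1000):.2f} s")
```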

The test values below are very explicit:

Relation between throughput and latency in TCP

Round trip latency    TCP throughput
0 ms                  93.5 Mbps
30 ms                 16.2 Mbps
60 ms                 8.07 Mbps
90 ms                 5.32 Mbps
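A plausible way to read these figures (our assumption, not stated in the article): if the transfer is limited by the classic 64 KB TCP window, the achievable throughput is capped at roughly window / RTT once latency dominates, which lands close to the measured values:

```python
# Assumption: the transfer is limited by the classic 64 KB maximum TCP
# window (no window scaling). Throughput is then capped at window / RTT.
WINDOW_BYTES = 65535

def max_throughput_mbps(rtt_ms):
    return WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6

for rtt_ms in (30, 60, 90):
    # Prints roughly 17.5, 8.7 and 5.8 Mbps for 30, 60 and 90 ms.
    print(f"{rtt_ms} ms -> {max_throughput_mbps(rtt_ms):.1f} Mbps")
```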


How the TCP congestion window handles missing acknowledgment packets

The TCP congestion window mechanism deals with missing acknowledgment packets as follows: if an acknowledgment is still missing after a period of time, the packet is considered lost and the TCP congestion window is cut in half (and hence the throughput too, which corresponds to the sender's perception of limited capacity on the route). The window size can then start increasing again if subsequent acknowledgments are received properly.
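The halve-on-loss behavior described above can be sketched as a toy additive-increase/multiplicative-decrease rule (the growth-by-one-segment step is a simplification):

```python
def next_cwnd(cwnd, ack_received):
    # Toy rule matching the behavior described above: halve the window when
    # an acknowledgment goes missing, otherwise grow it by one segment.
    return cwnd + 1 if ack_received else max(1, cwnd // 2)

cwnd = 32
for ack_received in (True, True, False, True):
    cwnd = next_cwnd(cwnd, ack_received)
    print(cwnd)  # 33, 34, 17, 18
```

A single missing acknowledgment undoes many round trips of growth, which is why even modest loss rates hurt throughput so much.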

Packet loss will have two effects on the speed of transmission of data:

  1. Packets have to be retransmitted (even if only the acknowledgment packet got lost and the data packets were delivered)
  2. The TCP congestion window size does not allow an optimal throughput.

The impact of packet loss and latency on TCP throughput


With 2% packet loss, the TCP throughput is between 6 and 25 times lower than with no packet loss.


Round trip latency    TCP throughput, no packet loss    TCP throughput, 2% packet loss
0 ms                  93.5 Mbps                         3.72 Mbps
30 ms                 16.2 Mbps                         1.63 Mbps
60 ms                 8.7 Mbps                          1.33 Mbps
90 ms                 5.32 Mbps                         0.85 Mbps

This applies whatever the reason for losing acknowledgment packets: genuine congestion, a server issue, packet shaping, etc.
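For readers who want a formula, the Mathis et al. approximation relates loss-limited TCP throughput to the segment size, the RTT and the loss rate. With a typical 1460-byte MSS (our assumption) it lands in the same ballpark as the measured values above, though real stacks with retransmission timeouts usually achieve less:

```python
import math

MSS_BYTES = 1460  # assumption: a typical Ethernet-sized maximum segment

def loss_limited_throughput_mbps(rtt_ms, loss_rate, c=1.22):
    # Mathis et al. approximation: throughput <= (MSS / RTT) * (C / sqrt(p)),
    # with C ~= 1.22 for the idealized periodic-loss model.
    rtt_s = rtt_ms / 1000
    return (MSS_BYTES * 8 / rtt_s) * (c / math.sqrt(loss_rate)) / 1e6

for rtt_ms in (30, 60, 90):
    print(f"{rtt_ms} ms, 2% loss -> {loss_limited_throughput_mbps(rtt_ms, 0.02):.2f} Mbps")
```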



Posted by Boris Rogier on 26 May 2016