Latency: Understanding Delays in Embedded Networks

White Paper
November 30, 2018

Standards-based Ethernet networks provide cost-effective, high-performance interconnect, enabling networks that carry traffic from many systems over shared cabling. For real-time embedded systems that process and respond to sensor data, a key measure of network performance is how long it takes data to get from one device to another – in other words, latency. An application such as email can take several seconds to deliver a message, with delays introduced by servers, switches, and routers, before it lands in the recipient’s inbox. In an embedded network with a single Ethernet switch, messages can be delivered in microseconds. For a specific application, latency is typically measured by sending traffic between two endpoints and reporting the average delay between sending and receiving each packet.
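The measurement described above can be sketched in a few lines: record a timestamp for each packet as it is sent and again as it is received, then average the per-packet delays. The timestamps below are hypothetical values for illustration; in a real test they would come from instrumented send/receive code on the two endpoints (with clocks synchronized for one-way measurements).

```python
import statistics

# Hypothetical per-packet timestamps in seconds (illustrative values only).
send_times = [0.000000, 0.001000, 0.002000, 0.003000]
recv_times = [0.000012, 0.001015, 0.002011, 0.003020]

# Per-packet one-way delay, and the average latency reported for the run
latencies = [rx - tx for tx, rx in zip(send_times, recv_times)]
avg_latency = statistics.mean(latencies)
print(f"average latency: {avg_latency * 1e6:.1f} microseconds")
```

Reporting only the average, as this sketch does, hides the spread of the individual delays, which is where jitter comes in.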

Closely related to the concept of latency is “jitter”, the variability of latency. While latency is reported as the average time between data being sent and received, this delay can vary from one packet to another based on network conditions. In systems that send packets at regular intervals, variability in inter-packet arrival times can present challenges for a receiver that processes those packets. If latency increases, the receiver may sit idle while waiting for a packet to arrive. Then, when a burst of packets arrives close together, the receiver may not have enough processing capacity to keep up or enough buffer space to store all the data while it is processed. For this reason, understanding the variability in the delay between packet arrivals can be just as important as the end-to-end latency itself.
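One simple way to quantify the variability described above is to look at the gaps between consecutive packet arrivals and compute their spread (here, the standard deviation of the gaps; other definitions of jitter exist, such as the RFC 3550 interarrival jitter used for RTP). The arrival times below are hypothetical, chosen to show a delayed packet followed by a burst arriving close together.

```python
import statistics

# Hypothetical arrival times (seconds) of packets nominally sent every 1 ms.
# Illustrative only: one packet is late, then a burst arrives close together.
arrivals = [0.0000, 0.0010, 0.0020, 0.0045, 0.0046, 0.0050]

# Inter-arrival gaps; ideally each gap would equal the 1 ms send interval
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]

mean_gap = statistics.mean(gaps)   # close to the nominal send interval
jitter = statistics.pstdev(gaps)   # spread of the gaps around that mean
print(f"mean gap: {mean_gap * 1e3:.2f} ms, jitter: {jitter * 1e3:.2f} ms")
```

Note that the mean gap alone looks healthy (about 1 ms), while the jitter figure exposes the burst that a receiver must be buffered and provisioned to absorb.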

Log in and download the white paper to learn more.

Read about:

  • Ethernet switching
  • High-performance embedded computing
  • Time-sensitive networking