Managing Network Latency


Latency, the time it takes data to travel from one device to another across a network, can be critical to the performance of connected embedded systems. For applications that rely on high-speed delivery of real-time data, latency can be a serious concern that puts safety and mission success at risk.
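One common way to quantify latency is to measure round-trip time: send a small packet and time how long the echo takes to return. The sketch below is illustrative only; it measures UDP round-trip time against a local echo server (`udp_echo_server` and `measure_rtt` are hypothetical names, not from any specific library), where a real measurement would target a remote device.

```python
import socket
import threading
import time

def udp_echo_server(sock):
    """Echo one datagram back to its sender."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)

def measure_rtt(host, port, payload=b"ping"):
    """Return one UDP round-trip time in seconds."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        start = time.perf_counter()
        s.sendto(payload, (host, port))
        s.recvfrom(1024)
        return time.perf_counter() - start

# Demo against a loopback echo server; substitute a remote host for real tests.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()
rtt = measure_rtt("127.0.0.1", port)
print(f"round-trip time: {rtt * 1e6:.0f} microseconds")
server.close()
```

Note that a single sample says little; production latency tests (as in RFC 2544 benchmarking) repeat the measurement across many frame sizes and report the distribution.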

Wide area networks that include multiple hops over long-distance links can face latency in the hundreds of milliseconds, resulting in two-way voice conversations with awkward delays and interruptions. Latency on a switched Ethernet LAN is typically much lower, measured in microseconds. Still, microseconds can matter to industrial applications that control motors based on inputs from networked sensors, or to radar processing systems that track multiple fast-moving targets.

[Figure: Gigabit Ethernet Switch Latency for Various Frame Lengths, RFC 2544 Report]

In recent years, new Ethernet switches have delivered not only improved link speeds, but also lower latency. Switches that offer “cut-through” forwarding can deliver sub-microsecond latency, making Ethernet a viable technology for a range of applications that previously required dedicated point-to-point links.
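The advantage of cut-through forwarding comes from when the switch starts transmitting. A store-and-forward switch must buffer the entire frame before forwarding it, so its latency grows with frame length; a cut-through switch can begin forwarding as soon as the destination address is read. A rough back-of-the-envelope comparison (ignoring switching-fabric processing time, and assuming the cut-through switch forwards after the 14-byte Ethernet header):

```python
def serialization_delay_us(frame_bytes, link_bps):
    """Time to clock frame_bytes onto a link, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

LINK_BPS = 1e9  # 1 Gb/s

for frame in (64, 512, 1518):
    # Store-and-forward: the whole frame must arrive before forwarding begins.
    sf = serialization_delay_us(frame, LINK_BPS)
    # Cut-through: forwarding starts once the 14-byte header has been read.
    ct = serialization_delay_us(14, LINK_BPS)
    print(f"{frame:5d} B frame: store-and-forward ~{sf:6.2f} us, cut-through ~{ct:.2f} us")
```

At 1 Gb/s, a maximum-size 1518-byte frame adds roughly 12 microseconds of store-and-forward delay per hop, while the cut-through figure stays in the sub-microsecond range regardless of frame length, which is consistent with the behavior described above.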

Increased link speed also enables converged networks, with enough capacity to carry traffic from multiple applications over a single cable. When real-time traffic shares a network with other applications, however, time-sensitive data can be delayed, causing increased latency, or unpredictable latency (jitter). Fortunately, many switches can be configured to classify and prioritize real-time application data to help ensure low and consistent latency for critical packets.
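For the switch to prioritize real-time traffic, the endpoint typically marks its packets, for example with a DSCP code point in the IP header that the switch maps to a high-priority queue. A minimal sketch of marking outbound datagrams with the Expedited Forwarding code point using the standard `IP_TOS` socket option (behavior assumes a POSIX-style stack; the switch-side queue mapping is configured separately):

```python
import socket

# DSCP Expedited Forwarding (EF, value 46) sits in the upper six bits
# of the IP TOS/traffic-class byte: 46 << 2 == 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Datagrams sent on this socket now carry the EF code point; a switch
# configured to trust DSCP can place them in a strict-priority queue.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(f"TOS byte set to 0x{tos:02X}")
sock.close()
```

Marking alone does not reduce latency; it only works if the switches along the path are configured to classify on the marking and service the priority queue ahead of best-effort traffic.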

Download "Latency: Understanding Delays in Embedded Networks" to learn about the causes of latency in embedded Ethernet networks and how networking features can be used to manage and reduce it.