Friday, March 29, 2024

Giving VoIP Traffic the Green Light, Part 1


VoIP call quality isn't always what it should be. Sometimes it is plagued by jitter, echo, lag, and even dropped calls. In the next two articles you're going to learn how to prioritize your VoIP traffic to get the best quality. Linux has all the tools you need to do this. All that's required from you is an understanding of TCP/IP networking fundamentals, which we'll talk about today, and a traffic-shaping utility. Next week we'll get into the hands-on implementation, using the excellent Wondershaper utility.

Usually, when it comes to computer networking speed and capacity, we think of bandwidth: the more bandwidth, the faster our network will be. But bandwidth is only one of several factors affecting network performance.

Some definitions are in order:

Bandwidth is the maximum number of bits per second that your LAN or Internet account supports. Think of it as pipeline diameter: the bigger the pipeline, the more bits it can carry. For example, Fast Ethernet has an upper bandwidth limit of 100 megabits per second (Mbps). That's bits, not bytes, so when it's expressed in bytes it doesn't look so impressive: 12.5 megabytes per second (MBps).

(As a side note, ATA disk transfer speeds are so far beyond Fast Ethernet speeds these days that you might want to consider upgrading your LAN to Gigabit Ethernet. For example, ATA 133 hard drives are rated at data transfer speeds of 133 MBps, while SATA II is rated at 300 MBps. See Resources for more information.)
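To make the bits-versus-bytes arithmetic concrete, here is a small back-of-the-envelope calculation in Python. The link speeds are the figures quoted above; the 700 MB file size is just an arbitrary example, and these are theoretical ceilings that protocol overhead eats into.

```python
# Rough bandwidth arithmetic: convert link speeds from megabits per second
# to megabytes per second, and estimate a best-case file transfer time.

def mbps_to_mbytes(mbps):
    """Convert megabits per second to megabytes per second (8 bits per byte)."""
    return mbps / 8.0

links = {"Fast Ethernet": 100, "Gigabit Ethernet": 1000}
file_size_mb = 700  # example file size in megabytes (roughly a CD image)

for name, mbps in links.items():
    mbytes = mbps_to_mbytes(mbps)
    seconds = file_size_mb / mbytes
    print(f"{name}: {mbps} Mbps = {mbytes:.1f} MBps, "
          f"~{seconds:.0f} s to move a {file_size_mb} MB file (best case)")
```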

Latency, or delay, is the real bugaboo of VoIP quality. It
doesn’t matter how fat your pipeline is if you are plagued by high latency.
You can think of latency as staring down your nice big pipeline waiting for
the bits to come out. While there are many possible causes of latency, one
is the basic architecture of today’s computer networking, which we’ll get
to in a minute.
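Latency is easy to see for yourself. The sketch below times a TCP connection handshake to a remote host, which gives a rough round-trip figure; the host name and port are placeholders, and tools like ping measure it more precisely.

```python
# Rough latency check: time how long a TCP handshake takes.
# This only approximates round-trip time; ICMP ping is more precise.
import socket
import time

host, port = "example.com", 80  # placeholder target; substitute your own

start = time.monotonic()
with socket.create_connection((host, port), timeout=5):
    pass  # completing the handshake is all we care about
elapsed_ms = (time.monotonic() - start) * 1000
print(f"TCP connect to {host}:{port} took ~{elapsed_ms:.1f} ms")
```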

Throughput is the intersection of bandwidth and latency: the actual number of bits transferred in a given length of time.
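One way latency caps throughput, regardless of raw bandwidth, is the classic TCP window limit: a sender can have at most one window of unacknowledged data in flight per round trip. The window size and RTT values below are illustrative, not measurements.

```python
# Illustrative throughput ceiling: with a 64 KB TCP window and no window
# scaling, throughput cannot exceed window_size / round_trip_time,
# no matter how much bandwidth the link has.

window_bytes = 64 * 1024  # classic 64 KB TCP receive window

for rtt_ms in (1, 20, 100):  # LAN-ish, regional, and intercontinental RTTs
    rtt_s = rtt_ms / 1000.0
    max_bps = (window_bytes * 8) / rtt_s
    print(f"RTT {rtt_ms:3d} ms -> at most {max_bps / 1_000_000:.1f} Mbps")
```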

TCP/IP networks (see below) are packet-switched, while traditional voice networks
are circuit-switched. When you make a call on a circuit-switched network, the
call is assigned a single dedicated, unshared circuit for its full duration.
You don’t experience network slowdowns, because when the network is filled to
capacity, new calls can’t be completed until someone hangs up. So for the most
part, voice calls over the PSTN (Public Switched Telephone Network) are good
quality and do not suffer from lag or interruptions. If part of your communication
is mangled, you say “huh?” and receive a re-transmission in real time.

Packet-switched networks operate differently. Data streams are broken into
small packets that march like a stream of ants across the network, often taking
different routes to their destination, where they are then reassembled. All
traffic streams share the same wires at the same time, which is good inasmuch
as it substantially increases the carrying capacity of the network.

This works fine for data transmissions, but less well for the quality of IP
voice calls, which require a smooth, uninterrupted, low-latency stream. You
have no control over what happens after your VoIP packets leave your network,
but at least you can give them a good start. You have a lot of control over
your LAN performance, which is important for your overall VoIP quality.

The TCP/IP protocol suite (or ‘stack’) is sometimes known as the Internet Protocol
Suite. TCP/IP is the most common usage, so we’ll stick with that. TCP/IP powers
virtually all computer networking these days, including the Internet. Prioritizing
your VoIP traffic means you’ll be manipulating packets based on fields in their
TCP/IP headers.
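One of those header fields is the IP Type of Service byte (today carried as DSCP), which is what most VoIP gear and traffic shapers key on. As a minimal sketch, and assuming a Linux host, an application can ask the kernel to mark its outgoing packets like this. Wondershaper works at a different layer (queueing disciplines), so treat this only as an illustration of the header field involved; the address and port are placeholders.

```python
# Minimal sketch (Linux): mark a socket's outgoing packets with DSCP EF
# ("Expedited Forwarding"), the class commonly used for voice traffic.
# Routers and traffic shapers can then match on this field in the IP header.
import socket

DSCP_EF = 46              # Expedited Forwarding code point
TOS_EF = DSCP_EF << 2     # DSCP occupies the upper six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Any datagram sent on this socket now carries ToS 0xB8 in its IP header.
sock.sendto(b"voice payload placeholder", ("192.0.2.10", 40000))
sock.close()
```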

Let’s take a look at the components of TCP/IP:

IP (Internet Protocol) is the fundamental protocol of the TCP/IP protocol suite. IP provides the basic delivery service on which TCP/IP networks are built. All TCP/IP data flows through IP. IP defines the means to identify and reach a target computer. It is called both an unreliable and a connectionless protocol because IP itself does not perform error-checking or verify delivery; those jobs are handled by other protocols. In practice it is fast and reliable.
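To see what IP actually carries, here is a sketch that packs a bare-bones IPv4 header by hand and labels its fields. It is purely illustrative: no packet is sent, the checksum is left at zero, and the addresses are documentation examples.

```python
# Illustrative only: build a minimal IPv4 header to show the fields IP cares
# about. Notice there is nothing here about acknowledgements or sequencing;
# that is why IP is called connectionless and unreliable.
import socket
import struct

version_ihl = (4 << 4) | 5         # IPv4, 5 x 32-bit words of header
tos = 0                            # Type of Service / DSCP byte
total_length = 20                  # header only, no payload
identification, flags_fragment = 0, 0
ttl = 64                           # hop limit
protocol = socket.IPPROTO_UDP      # what rides inside (17 = UDP)
checksum = 0                       # normally computed over the header
src = socket.inet_aton("192.0.2.1")
dst = socket.inet_aton("198.51.100.1")

header = struct.pack("!BBHHHBBH4s4s", version_ihl, tos, total_length,
                     identification, flags_fragment, ttl, protocol,
                     checksum, src, dst)
print(f"IPv4 header is {len(header)} bytes: {header.hex()}")
```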

TCP (Transmission Control Protocol) provides end-to-end error detection and correction. This is what makes it possible to transfer gigabytes of files over the network without mistakes. TCP first establishes and verifies a connection; it won't send data until it is sure the receiving host is ready to receive it. TCP then numbers each segment in a data stream in sequence, and again does not send any data until the receiving host acknowledges the correct starting number. Each data segment is checksummed and acknowledged, so if a packet is lost or damaged, TCP asks for it to be re-sent. When the network is congested, many packets are lost and re-sent.
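All of those guarantees are handled by the kernel; from an application's point of view TCP simply looks like a reliable byte stream. The loopback sketch below shows that view: a connection is established first, and the bytes arrive intact and in order. The echo server and message are just illustrative.

```python
# Minimal loopback TCP exchange: the kernel handles the handshake, sequencing,
# checksums, acknowledgements, and retransmission; the application sees only
# a reliable, ordered byte stream.
import socket
import threading

def server(listener):
    conn, _ = listener.accept()          # completes the three-way handshake
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())       # echo back, transformed

listener = socket.create_server(("127.0.0.1", 0))  # bind to a free port
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"tcp delivers these bytes in order")
    print(client.recv(1024))             # b'TCP DELIVERS THESE BYTES IN ORDER'
listener.close()
```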

UDP (User Datagram Protocol) is also called an unreliable, connectionless protocol because it has no error-checking or delivery guarantee. UDP is fast and low-overhead, and is used for things like DNS (Domain Name System) and NTP (Network Time Protocol) that don't send complex data streams, and that need speed more than 100 percent reliability.
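VoIP media itself typically rides on UDP (as RTP), which is why a lost voice packet is simply gone rather than re-sent, and why latency and prioritization matter so much. The sketch below shows how little ceremony a UDP send involves; the address is a documentation placeholder and nothing is listening for the datagram.

```python
# Fire-and-forget UDP: no handshake, no sequence numbers, no acknowledgement.
# If this datagram is lost in transit, nothing in UDP will notice or resend it.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"one voice frame, gone if dropped", ("192.0.2.10", 40000))
sock.close()
print("datagram handed to the network; UDP offers no delivery confirmation")
```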

There is a lot of contention during a typical TCP/IP session; what
we want to do is push our VoIP packets to the head of the line. Come back next
week to learn how to do this.

This article was first published on LinuxPlanet.com.
