In the beginning was the circuit-switched voice network. Every call was given a dedicated connection for the duration of the call, like a train on a track, and life was good. Then came the Internet, and lo! bearded geeks did rampantly stuff data packets over analog circuits, and the result was like a crowded freeway with contentions and collisions. But these same circuits could handle far more traffic, even if it was congested and dangerous. And thus did e-mail, Web surfing, and online shopping come to the masses.
But this was not good enough for mad Web designers and demented marketing life-forms, who decreed that computers should be just like televisions, pushing Rich Content and Enhanced Multi-media Experiences (translation: commercials) into our mouths like baby birds. Fortunately, this was hampered by the nature of packet-switched networks and TCP/IP, which try hard to guarantee that data packets will arrive at their destinations, but not in any particular time frame or order. And thus we have online streaming video that does not stream, but stalls and jerks, and unsynchronized jittery audio, like a parody of bad educational films.
But, as with all technologies, it can be used for good or ill, and at long last streaming media is being employed for actually useful things like VoIP, videophones, and Internet conferencing. So now we have legitimate reasons to want our packet-switched networks to behave like circuit-switched networks. But (there are many buts inherent in this topic) as long as we’re stuck on IPv4 networks there isn’t a lot we can do about this. It takes a lot of clever routing and caching to create the equivalent of a dedicated high-occupancy lane on the information highway, because QoS in IPv4 is a joke. You can mark your packets any way you want, and most routers will ignore your wishes.
Linux is blessed with a truckload of software utilities for finding bottlenecks and other problems. Maybe you’ll be able to do something about the problem, maybe not, but at least you can find out what and where it is.
iperf is a nifty little program for measuring throughput, jitter, and datagram loss. It has both client and server pieces, so it requires installation at both ends of the connection you’re measuring. I like to have three terminals laid out so I can see all of them at once: one for the client, one for the server, and one running tcpdump just to see all those packets zoom by. Interestingly, tcpdump causes a significant slowdown, which you can see for yourself. First start up iperf in server mode, then run it from the client PC with these commands:
user@xena:~$ iperf -s
user@uberpc:~$ iperf -c xena
By default iperf uses TCP/UDP port 5001, so make sure it’s not blocked. This is the result of a run without tcpdump running:
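A quick way to check that the port is actually free before you start the server is a sketch like this, using ss from iproute2; the iptables rules are hypothetical examples, so adjust them to your own firewall policy:

```shell
# See whether anything is already listening on TCP port 5001.
if ss -ltn 2>/dev/null | grep -q ':5001 '; then
    echo "port 5001 already in use"
else
    echo "port 5001 looks free"
fi
# If a packet filter is in the way, rules along these lines would
# open the port (example only; adapt to your own firewall setup):
#   iptables -A INPUT -p tcp --dport 5001 -j ACCEPT
#   iptables -A INPUT -p udp --dport 5001 -j ACCEPT
```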
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.10 port 5001 connected with 192.168.1.76 port 471
[  4]  0.0-10.0 sec   112 MBytes  93.8 Mbits/sec
Pretty nice: that’s as good as you can get on Fast Ethernet. This is what happens when tcpdump is running:
[ 5] 0.0-10.0 sec 56.5 MBytes 47.3 Mbits/sec
By default, iperf flings TCP packets over your wires as fast as possible. The -d option runs a bi-directional test, measuring both directions at once:
user@uberpc:~$ iperf -c xena -d
[  4]  0.0-10.0 sec   109 MBytes  91.3 Mbits/sec
[  5]  0.0-10.0 sec  84.5 MBytes  70.8 Mbits/sec
Debugging multi-cast network problems can drive you buggy. iperf can test multi-cast performance by running several servers listening on your multi-cast address:
user@xena:~$ iperf -su -B 224.0.55.55
user@uberpc:~$ iperf -su -B 224.0.55.55
user@client:~$ iperf -c 224.0.55.55 -u -b 512k
You’ll want to set the -b value to a speed appropriate for your network, and use your own multi-cast IP address.
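To confirm that a host actually joined the group while its iperf server is bound to it, you can list the kernel's multicast memberships (Linux with iproute2; the exact output format varies by version):

```shell
# List IPv4 multicast group memberships per interface; the group your
# iperf server bound with -B should appear under "inet". Most hosts
# also show 224.0.0.1 (all-hosts), which is always joined.
ip maddr show 2>/dev/null | grep 224 || echo "no IPv4 multicast groups visible"
```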
You can test throughput on your own compressed and uncompressed files by specifying a filename on the client:
user@uberpc:~$ iperf -c xena -F [filename]
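For instance, to compare how compressible and incompressible data move across the wire, you might build two test files and push each one through. This is a sketch: the file names are invented, and the commented iperf lines assume a server is already running on xena:

```shell
# Build a 10 MB file of zeros (compresses well) and a 10 MB file of
# random bytes (barely compresses at all).
dd if=/dev/zero    of=zeros.bin  bs=1M count=10 2>/dev/null
dd if=/dev/urandom of=random.bin bs=1M count=10 2>/dev/null
ls -l zeros.bin random.bin
# Then send each one against a running server and compare the results:
#   iperf -c xena -F zeros.bin
#   iperf -c xena -F random.bin
```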
Testing UDP is fun too. Stop the server with Ctrl+C, then run these commands:
user@xena:~$ iperf -su
user@uberpc:~$ iperf -c xena -u
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec  0.003 ms    0/  893 (0%)
That’s quite a difference. In fact, I would use the word “sucks”. Why is UDP so slow? It’s not a network problem: iperf caps UDP traffic at a default bandwidth of about 1 Mbit/second. We can try some different values to see what happens. Let’s tell it to use all available bandwidth:
user@xena:~$ iperf -su
user@uberpc:~$ iperf -c xena -u -b 100m
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  4]  0.0-10.0 sec   113 MBytes  95.0 Mbits/sec  0.008 ms  544/81389 (0.67%)
[  4]  0.0-10.0 sec  1 datagrams received out-of-order
That’s very good speed, and 0.67% datagram loss is insignificant. That’s a good clean connection. UDP has been promoted from a bit player to a protocol of importance, thanks to VoIP. VoIP traffic uses UDP instead of TCP because of the lower overhead. A bit of datagram loss doesn’t harm the quality of VoIP calls, but TCP’s dogged determination to deliver packets no matter how late or out-of-order can make a real hash of voice calls. Depending on network conditions and the quality of your IP phones, a VoIP call can tolerate as much as 10% UDP datagram loss.
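One way to find that loss threshold on your own network is to step the target bandwidth upward and watch the loss figure at each step. A rough sketch, assuming a UDP server (iperf -su) is already running on host xena and iperf is installed locally:

```shell
# Step the UDP target bandwidth up and note where datagram loss begins.
for bw in 1m 10m 50m 100m; do
    echo "--- testing at $bw ---"
    # run only where iperf is actually installed
    command -v iperf >/dev/null && iperf -c xena -u -b "$bw" -t 5 || true
done
```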
Applications determine how many TCP or UDP packets are sent, and what size. To get a more real-world idea of performance, you can set the size of the UDP datagram to the same size that your applications use. This is easy to find: run tcpdump and filter for UDP datagrams:
# tcpdump -p udp
11:41:17.618421 IP uberpc.alrac.net.ipp > 192.168.1.255.ipp: UDP, length 200
This example whales on your line by sending 200-byte datagrams at 100 Mbits/second:
user@xena:~$ iperf -su -i 1
user@uberpc:~$ iperf -c xena -u -l 200 -b 100m
[  3]  0.0-10.0 sec   106 MBytes  88.9 Mbits/sec  0.219 ms 2683/187644 (1.4%)
The -i option generates a progress report every second. Even more important to VoIP is the jitter value, which in this example is 0.219 milliseconds. That’s enough to be noticeable, and you’ll sound like you’re driving on a bumpy road.
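For a load closer to a single real VoIP call than a 100 Mbit flood, you can match iperf's flags to the codec. A back-of-the-envelope sketch, assuming G.711's usual 160-byte payload every 20 ms (the host name xena follows the earlier examples):

```shell
# G.711 sends one 160-byte voice payload every 20 ms, i.e. 50 per second.
payload=160   # bytes per datagram
pps=50        # datagrams per second
echo "$((payload * 8 * pps)) bits/sec of voice payload"   # 64000 bits/sec
# So one call is roughly:  iperf -c xena -u -l 160 -b 64k
```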
Next week we’ll look at more ways to figure out what the heck is mucking up your network, and how to use iperf and other tools to debug Internet problems.
Tutorial courtesy of Enterprise Networking Planet.