Best-Path vs. Multi-Path Overlay Routing
David G. Andersen, Alex Snoeren, Hari Balakrishnan
Internet Measurement Conference, Miami, FL, October 2003
Time-varying congestion on Internet paths and failures due to software, hardware, and configuration errors often disrupt packet delivery on the Internet. Many approaches to avoiding these problems use multiple paths between two network locations. These approaches rely on a path-independence assumption in order to work well; i.e., they work best when the problems on different paths between two locations are uncorrelated in time.
This paper examines the extent to which this assumption holds on the
Internet by analyzing 14 days of data collected from 30 nodes in the
RON testbed.
We examine two problems that manifest
themselves---congestion-triggered loss and path failures---and find
that the chance of losing two packets between the same hosts is
nearly as high when those packets are sent through an intermediate
node (60%) as when they are sent back-to-back on the same path (70%).
In so doing, we also compare two different ways of
taking advantage of path redundancy proposed in the literature: mesh
routing based on packet replication, and reactive routing based on
adaptive path selection.
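The loss-correlation comparison above amounts to estimating a conditional probability from probe-pair traces: given that the first packet of a pair was lost, how often was the second also lost? A minimal sketch of that computation follows; the trace format, helper name, and toy numbers are hypothetical and not drawn from the paper's dataset.

```python
# Hypothetical sketch: estimating conditional loss probability for
# probe pairs sent back-to-back on one path vs. via an intermediate node.

def conditional_loss(pairs):
    """pairs: list of (first_lost, second_lost) booleans, one per probe pair.

    Returns P(second lost | first lost) over the trace.
    """
    first_losses = [p for p in pairs if p[0]]
    if not first_losses:
        return 0.0
    return sum(1 for p in first_losses if p[1]) / len(first_losses)

# Toy traces (illustrative only): True means the packet was lost.
same_path = [(True, True), (True, True), (True, False),
             (False, False), (True, True)]
via_intermediate = [(True, True), (True, False), (True, True),
                    (False, False), (True, False)]

print(conditional_loss(same_path))         # 0.75 on this toy data
print(conditional_loss(via_intermediate))  # 0.5 on this toy data
```

If path failures were truly independent, the via-intermediate conditional loss rate would be far lower than the same-path rate; the paper's finding of 60% vs. 70% indicates that, in practice, the two are close.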
Bibtex Entry:
@inproceedings{andersen2003best-path,
  author    = {David G. Andersen and Alex Snoeren and Hari Balakrishnan},
  title     = {{Best-Path vs. Multi-Path Overlay Routing}},
  booktitle = {Internet Measurement Conference},
  year      = {2003},
  month     = {October},
  address   = {Miami, FL}
}