Bandwidth Settings
We use BESS to emulate two network settings: one with an 8 Mbps bottleneck bandwidth (which we refer to as highly constrained) and one with 50 Mbps (which we refer to as moderately constrained). We choose these bandwidths because: (a) 50 Mbps is the median broadband speed experienced by more than half the countries in the world today, and (b) 8 Mbps represents the bottom 10th percentile of country-level median bandwidths. 8 Mbps is also approximately the bandwidth a 2K video consumes, allowing us to examine how contentious video services can be when they are able to consume the entire link bandwidth. While these are the primary bandwidths Prudentia uses to evaluate fairness, in section 6 of the paper we run a one-off evaluation to examine how fairness evolves at other bandwidths. Prudentia's regular iterations over services (http://www.internetfairness.net) include only these two settings because adding more settings would multiplicatively increase the time to cycle across all pairs of services.
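The paper does not reproduce the BESS pipeline, so as a rough illustration of the kind of shaping involved, the sketch below builds equivalent Linux tc token-bucket commands for the two bottleneck settings. The interface name and the burst/latency parameters are assumptions for illustration, not the authors' actual configuration.

```python
# Illustrative stand-in only: the testbed shapes traffic with BESS, whose
# configuration is not shown here. This builds tc Token Bucket Filter
# commands that cap egress at the two bottleneck rates we study.
BOTTLENECKS_MBPS = {"highly_constrained": 8, "moderately_constrained": 50}

def tbf_command(iface: str, rate_mbps: int) -> str:
    # burst and latency values are placeholders, not the paper's parameters
    return (f"tc qdisc replace dev {iface} root tbf "
            f"rate {rate_mbps}mbit burst 32kb latency 400ms")

for name, rate in BOTTLENECKS_MBPS.items():
    print(name, "->", tbf_command("eth0", rate))
```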
Queue Sizing
We set the size of our drop-tail FIFO bottleneck queue to approximately 4×BDP, based on input from large content providers, who report that these are the buffer sizes they see in practice, and on past work which implies that queues are at least this large. In section 6 of the paper we briefly examine the effect an even larger buffer would have on fairness.
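For concreteness, the arithmetic behind this sizing is sketched below: the bandwidth-delay product is the bottleneck rate times the 50ms RTT, and the queue is four times that. The 1500-byte MTU used to convert bytes to packets is an assumption for illustration.

```python
# Queue-sizing arithmetic: queue ~= 4 x BDP, BDP = bottleneck bandwidth x RTT.
RTT_S = 0.050        # normalized RTT (50 ms)
MTU_BYTES = 1500     # assumed packet size for the packet-count conversion

def queue_size(bandwidth_mbps: float, multiplier: int = 4):
    bdp_bytes = bandwidth_mbps * 1e6 * RTT_S / 8   # bandwidth-delay product in bytes
    queue_bytes = multiplier * bdp_bytes
    return queue_bytes, round(queue_bytes / MTU_BYTES)

for mbps in (8, 50):
    qbytes, qpkts = queue_size(mbps)
    print(f"{mbps} Mbps: 4xBDP = {qbytes / 1000:.0f} KB (~{qpkts} packets)")
```

At 8 Mbps this yields a 200 KB queue (roughly 133 full-size packets), and at 50 Mbps a 1.25 MB queue (roughly 833 packets).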
RTT Settings
We normalize round-trip times between services to 50ms: all services we tested had an RTT to/from the testbed of ≤ 50ms, and we use the software switch to insert additional delay so that every service sees a 50ms RTT. We selected 50ms because the highest RTT we recorded for any service was 40ms and we can only increase, not reduce, the delay a service experiences. For services that use multiple flows, we normalize the RTT based on the first flow of that service.
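The normalization itself is simple: the delay inserted at the software switch is the 50ms target minus the RTT measured for the service's first flow. A minimal sketch follows; the measured RTT values in the usage lines are hypothetical examples.

```python
# Sketch of RTT normalization to the 50 ms target.
TARGET_RTT_MS = 50.0

def added_delay_ms(measured_rtt_ms: float) -> float:
    # We can only add delay, never remove it, so the measured RTT must not
    # exceed the target (the largest we observed was 40 ms).
    assert measured_rtt_ms <= TARGET_RTT_MS, "service RTT exceeds the 50 ms target"
    return TARGET_RTT_MS - measured_rtt_ms

print(added_delay_ms(12.0))   # hypothetical nearby CDN: add 38 ms
print(added_delay_ms(40.0))   # largest RTT we recorded: add 10 ms
```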
Background Noise
Although we fully control the client’s access link, we do not control what happens over the Internet. Hence, we cannot prevent upstream bandwidth bottlenecks, throttling, or other sources of loss. However, we mitigate these effects using two techniques. First, to detect upstream throttling, we run every service ‘solo’ to measure its maximum transfer rate in the absence of contention; only one service (OneDrive, which should have achieved higher throughput; see Table 1) is throttled upstream, either by its server or by the network. Second, to mitigate the effects of upstream congestion caused by transient traffic, we run multiple experiments between every pair of services and repeat experiments every two weeks; we also discard any experiment with more than 0.05% packet loss external to our testbed. If we see experiments with high variability or a large number of ‘outlier’ results, our scheduler automatically re-queues the service pair for additional testing to achieve stronger statistical significance, up to a maximum of 30 trials.
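The sketch below captures the shape of these noise-mitigation checks: drop trials whose external loss exceeds 0.05%, and re-queue a pair for more trials when its results are too variable, up to the 30-trial cap. The specific variability test (a coefficient-of-variation threshold) is an assumption for illustration, not the statistic our scheduler actually uses.

```python
# Illustrative noise-mitigation checks, assuming a coefficient-of-variation
# test as the (hypothetical) variability criterion.
from statistics import mean, stdev

MAX_EXTERNAL_LOSS = 0.0005   # discard trials with > 0.05% loss outside the testbed
MAX_TRIALS = 30              # cap on re-queued trials per service pair
CV_THRESHOLD = 0.2           # hypothetical variability threshold

def keep_trial(external_loss_rate: float) -> bool:
    # Keep only trials whose loss external to the testbed is within bounds.
    return external_loss_rate <= MAX_EXTERNAL_LOSS

def needs_more_trials(throughputs: list[float]) -> bool:
    # Re-queue the pair while results remain variable and the cap is not hit.
    if len(throughputs) >= MAX_TRIALS:
        return False
    if len(throughputs) < 2:
        return True
    cv = stdev(throughputs) / mean(throughputs)   # coefficient of variation
    return cv > CV_THRESHOLD
```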