Home Exam 1: Evaluation of Congestion Control for Sharing the Resources over a Common Bottleneck

In this assignment you will evaluate the performance of different congestion control schemes for data delivery from a cloud provider to home clients.

  • Compare and evaluate the performance of the Cubic and BBR congestion control algorithms over a bottleneck.

  • Write a report where you present the evaluation and the main conclusions.

  • Create a poster (2 x A3 pages) that the group will present on March 16th.

Scenario / Testbed

The assignment will be graded based on the groups’ ability to produce useful and correct information within the boundaries of the given time and resources.

You must design a set of experiments to be performed on the testbed provided for the assignment.

The testbed scenario is the case of a cloud service provider that needs to choose a congestion control algorithm for providing services to clients in homes and businesses. The data centre of the provider is well provisioned with resources and bandwidth, so the assumption is that the bottleneck is usually located at the edge, close to the client. The figure below gives a visual presentation of the scenario.

[Figure: scenario-inf5072.png]

To create a testbed emulating this scenario, we will provide each group with a pair of virtual machines hosted at different Amazon EC2 server farms. You will need to create a bottleneck on the receiver machine, limiting incoming traffic to the target capacity. The figure below shows how this setup should look.

[Figure: scenario-inf5072-vms.png]

Consult the course Amazon VM and Congestion Control FAQ for details on how this can be set up in practice.

Since the virtual machines have built-in capacity limitations based on the instance pricing, the bottleneck must be set low enough that you do not end up measuring the token-bucket bandwidth limitation of the sender virtual machine instead. You must therefore make sure that the capacity at the bottleneck never exceeds 10 Mbps.
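
As an illustration, the sketch below shows one possible way to create such a bottleneck with tc/netem by redirecting the receiver's incoming traffic to an ifb device and rate-limiting it there. The interface name, rate and queue length are assumptions made for the example; consult the course Amazon VM and Congestion Control FAQ for the setup recommended for the course VMs.

#!/usr/bin/env python3
"""Minimal sketch: shape incoming traffic on the receiver down to a 10 Mbit/s
bottleneck. Assumptions (adapt to your VM): the ingress interface is called
"eth0", the ifb kernel module is available, and the script runs as root."""
import subprocess

IFACE = "eth0"   # assumed name of the receiver's network interface
RATE = "10mbit"  # keep the bottleneck at or below 10 Mbit/s, per the assignment
LIMIT = "100"    # queue length in packets (tail-drop); a parameter you should choose and report

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# Redirect ingress traffic to an ifb device so it can be shaped like egress traffic.
run("modprobe ifb numifbs=1")
run("ip link set dev ifb0 up")
run(f"tc qdisc add dev {IFACE} handle ffff: ingress")
run(f"tc filter add dev {IFACE} parent ffff: protocol ip u32 match u32 0 0 "
    "action mirred egress redirect dev ifb0")

# Rate-limit the redirected traffic on ifb0 with netem (tail-drop FIFO queue).
run(f"tc qdisc add dev ifb0 root netem rate {RATE} limit {LIMIT}")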

For this assignment, we assume that the goal of the service provider is to deliver large files (greedy flows) and that the main purpose of the measurements is to assess fairness, packet loss rates, achieved throughput and oscillations in throughput.
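
As an illustration of how some of these metrics can be quantified, the sketch below computes Jain's fairness index over per-flow average throughputs and uses the coefficient of variation of per-interval throughput samples as a simple measure of oscillation. The numbers are made up for the example; you may of course quantify fairness and oscillation differently.

"""Illustrative metric calculations: Jain's fairness index and a simple
throughput-oscillation measure. The sample data below is made up."""

def jain_fairness(throughputs):
    # Jain's index: (sum x_i)^2 / (n * sum x_i^2); 1.0 means a perfectly fair share.
    n = len(throughputs)
    total = sum(throughputs)
    squares = sum(x * x for x in throughputs)
    return (total * total) / (n * squares) if squares > 0 else 0.0

def coefficient_of_variation(samples):
    # Standard deviation divided by the mean of per-interval throughput samples;
    # a rough proxy for how much a flow's rate oscillates over time.
    mean = sum(samples) / len(samples)
    variance = sum((x - mean) ** 2 for x in samples) / len(samples)
    return (variance ** 0.5) / mean if mean > 0 else 0.0

# Average throughput (Mbit/s) of three competing flows (example values).
flows = [4.1, 3.2, 2.5]
print("Jain's fairness index:", round(jain_fairness(flows), 3))

# Per-second throughput samples of one flow, e.g. parsed from iperf3 JSON output.
samples = [3.9, 4.4, 3.1, 4.8, 3.6]
print("Oscillation (coefficient of variation):", round(coefficient_of_variation(samples), 3))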

You may choose the tools for performing the experiments and analysing the results based on what you are familiar with and what you deem most appropriate for the purpose. The important aspect is that you are able to produce a complete and clear report at the end of the assignment period.

To test how the different congestion control algorithms react as new traffic arrives on the bottleneck, you should use a “staggered start” of your sources. That means starting one flow at a time and letting the running flows stabilise before adding a new flow.

You should evaluate each congestion control algorithm in competition with other flows running only the same algorithm and in combination with flows running the other algorithm(s) you are evaluating.
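
One possible way to drive such experiments is sketched below using iperf3, which on Linux can select the congestion control algorithm per flow with its --congestion (-C) option. The receiver address, ports, timings and algorithm combination are example values, not requirements; any tool you are comfortable with will do.

"""Sketch of a staggered-start experiment driver using iperf3. Assumptions (not
part of the assignment): one iperf3 server per port is already running on the
receiver (e.g. iperf3 -s -p 5201), RECEIVER holds its address, and the kernel
offers both cubic and bbr (see /proc/sys/net/ipv4/tcp_available_congestion_control)."""
import subprocess
import time

RECEIVER = "203.0.113.10"          # placeholder address of the receiver VM
PORTS = [5201, 5202, 5203]         # one iperf3 server instance per port
ALGOS = ["cubic", "cubic", "bbr"]  # per-flow algorithm; vary this list per experiment
STAGGER = 30                       # seconds between flow starts, so flows can stabilise
DURATION = 180                     # running time of the first flow (seconds)

procs = []
for i, (port, algo) in enumerate(zip(PORTS, ALGOS)):
    # Later flows run shorter so that all flows end at roughly the same time.
    cmd = [
        "iperf3", "-c", RECEIVER, "-p", str(port),
        "-C", algo,                      # congestion control for this flow (Linux only)
        "-t", str(DURATION - i * STAGGER),
        "-i", "1",                       # 1-second throughput samples for later analysis
        "-J",                            # JSON output, easier to parse
        "--logfile", f"flow{i}_{algo}.json",
    ]
    print("starting:", " ".join(cmd))
    procs.append(subprocess.Popen(cmd))
    if i < len(PORTS) - 1:
        time.sleep(STAGGER)              # staggered start: wait before adding the next flow

for p in procs:
    p.wait()

Changing the ALGOS list then covers both the homogeneous runs (all Cubic or all BBR) and the mixed runs.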

We encourage you to discuss the challenges and techniques across groups to reduce the overhead in attaining a new field of knowledge. Copying of code, scripts or experimental results, however, will be counted as cheating.

Report

You must write up the results as a technical report of no more than 4 pages in ACM format. It is expected that such a report includes the core elements presented in the lectures under “A systematic approach to performance evaluation”. The results must be based on your own experiments and your own data.

The report is evaluated on writing quality, clarity of presentation, and the trustworthiness and correctness of the results. The evaluation does not consider whether related work (citations of other papers) is included.

Evaluation Details

In our evaluation of the reports, we will focus on the following elements:

  • Choice of metrics, workloads, system configuration parameters and methodology for the experiments

  • Use of statistically sound methods when analysing the data

  • Disposition of the available time (ability to collect and present useful information within the boundaries of the available resources)

  • Objectivity in defining the work, choosing metrics and workloads, in the analysis and in presenting the results

  • Transparency of reporting (exposure of assumptions and limitations to the reader)

  • Clarity of presentation

Bonus elements:

  • Analysis of metrics, beyond the core metrics listed above, that helps illustrate the qualities of the different congestion control mechanisms.

  • Analysis of the performance with a commonly used AQM (like CoDel or PIE) instead of basic tail-drop FIFO queues (a configuration sketch follows after this list)

  • Greater variation of system configurations (RTTs, bottleneck link rates etc.)

  • Evaluation of one or more additional significant congestion control mechanisms.
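
For the AQM bonus item, the sketch below shows one possible way to replace the tail-drop queue at the ifb0 bottleneck from the earlier sketch: HTB provides the 10 Mbit/s rate limit and CoDel (or fq_codel/PIE) becomes its leaf queueing discipline. This is an assumed structure for illustration, not a prescribed setup.

"""Sketch for the AQM bonus item: replace the tail-drop bottleneck queue with CoDel.
Assumes the ifb0-based bottleneck created in the earlier sketch; any emulated extra
delay would then have to be added elsewhere, e.g. with netem on the sender's egress."""
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

run("tc qdisc replace dev ifb0 root handle 1: htb default 10")
run("tc class add dev ifb0 parent 1: classid 1:10 htb rate 10mbit")
run("tc qdisc add dev ifb0 parent 1:10 handle 10: codel")   # or fq_codel / pie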

 

Formalities

The deadline for handing in your assignment is Monday, March 13th at 23:59:59.999.

Deliver your report (as PDF) at https://devilry.ifi.uio.no/.

The groups should also prepare a poster (2 x A3 pages) and a quick talk (max 5 minutes, without slides) where you pitch your poster to the class on March 16th. Name the poster file with your group name and e-mail it to inf5072@ifi.uio.no no later than noon (12:00) on March 15th. We will then print the poster for you.

For questions and course-related chatter, we have created a Slack space:

https://mpglab.slack.com/messages/inf5072/

 

There will be a prize for best poster/presentation (awarded by an independent panel and independent of the grade).

Please check the Amazon VM and Congestion Control FAQ page for updates and frequently asked questions.

For questions please contact: inf5072@ifi.uio.no

 

Resources:

BBR from ACM Queue: http://queue.acm.org/detail.cfm?id=3022184

BBR patch set (for detailed reference): https://lwn.net/Articles/701177/

Netem wiki page: https://wiki.linuxfoundation.org/networking/netem

 

Published 23 Feb. 2017 15:24 - Last modified 10 Mar. 2017 10:57