The Internet is increasingly a platform for online services, such as email, Web search, social networks, and virtual worlds, running on rack after rack of servers in data centers. The servers not only communicate with end users, but also with each other to analyze data (for example, to build a search index) or compose Web pages (for example, by combining data from multiple backend servers). With the advent of large data centers, the study of the networks that interconnect these servers has become an important topic to researchers and practitioners alike.
Data-center networking presents unique opportunities and challenges, compared to traditional backbone and enterprise networks:
In light of these new characteristics, researchers have been revisiting everything in networking, from addressing and congestion control to routing and the underlying topology, with the unique needs of data centers in mind.
The following paper presents one of the first measurement studies of network traffic in data centers, highlighting in particular the volatility of the traffic even over relatively short timescales. These observations led the authors to design an "agile" network engineered for all-to-all connectivity with no contention inside the network. This gives data-center operators the freedom to place applications on any servers, without concern for the performance of the underlying network. Having an agile network greatly simplifies the task of designing and running online services.
More generally, the authors propose a simple abstraction: a single "virtual" layer-two switch (hence the name "VL2") for each service, with no interference from the many other services running in the same data center. They achieve this goal through several key design decisions, including flat addressing (so service instances can run on any server, independent of its location) and Valiant Load Balancing (to spread traffic uniformly over the network). A Clos topology ensures the network has many paths between each pair of servers. To scale to large data centers, the servers take responsibility for translating addresses to the appropriate "exit point" from the network, obviating the need for the networking equipment to keep track of the many end hosts in the data center.
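To make these two ideas concrete, here is a minimal sketch, not the authors' implementation, of a server-side directory lookup (mapping a flat "application address" to the "location address" of its exit point) and of Valiant Load Balancing (picking a random intermediate switch per flow). All names and addresses below are hypothetical, purely for illustration.

```python
import random

# Hypothetical directory: flat application address (AA) -> location address (LA)
# of the top-of-rack switch the service instance currently sits behind.
DIRECTORY = {
    "10.0.0.1": "192.168.1.7",
    "10.0.0.2": "192.168.2.3",
}

# Hypothetical intermediate switches at the top tier of the Clos topology.
INTERMEDIATE_SWITCHES = ["172.16.0.1", "172.16.0.2", "172.16.0.3"]

def resolve_exit_point(app_address: str) -> str:
    """Translate a flat application address to its current exit point (LA)."""
    return DIRECTORY[app_address]

def pick_intermediate() -> str:
    """Valiant Load Balancing: choose a random intermediate switch for a flow,
    spreading traffic uniformly over the many paths of the Clos network."""
    return random.choice(INTERMEDIATE_SWITCHES)

if __name__ == "__main__":
    dst = "10.0.0.2"
    print("exit point:", resolve_exit_point(dst))
    print("via intermediate:", pick_intermediate())
```

Because the lookup happens on the servers, a service instance can migrate to a different rack simply by updating the directory entry; the switches themselves never need to learn per-host state.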
In addition to proposing an effective design, the authors illustrate how to build the solution using mechanisms available in existing network switches (for example, equal-cost multipath routing, IP anycast, and packet encapsulation). This allows data centers to deploy VL2 with no changes to the underlying switches, substantially lowering the barrier for practical deployment. This paper is a great example of rethinking networking from scratch, while coming full circle to work with today's equipment. Indeed, the work described in the VL2 paper has already spawned substantial follow-up work in the networking research community, and likely will for years to come.
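As a rough illustration of how encapsulation lets ordinary switches do the work, the hypothetical sketch below wraps an application packet in two outer headers: one addressed to the destination's top-of-rack location address, and an outermost one addressed to an anycast address shared by the intermediate switches, so that equal-cost multipath routing in the existing hardware spreads flows across the fabric. The field names and addresses are illustrative, not the paper's exact packet format.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dst: str         # destination address carried in this header
    payload: object  # application data, or an inner (encapsulated) Packet

# Hypothetical anycast address announced by all intermediate switches.
ANYCAST_INTERMEDIATE = "172.16.255.1"

def encapsulate(inner: Packet, tor_location_address: str) -> Packet:
    """Wrap the original packet in two outer headers:
    first the destination ToR's location address, then the anycast address,
    leaving path selection to equal-cost multipath routing in the switches."""
    via_tor = Packet(dst=tor_location_address, payload=inner)
    return Packet(dst=ANYCAST_INTERMEDIATE, payload=via_tor)

if __name__ == "__main__":
    original = Packet(dst="10.0.0.2", payload="app data")
    wire_packet = encapsulate(original, tor_location_address="192.168.2.3")
    print(wire_packet)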