
Communications of the ACM

Practice

Cache Me If You Can




Years ago I squandered most of my summer break locked inside my apartment, tackling an obscure problem in network theory—the bidirectional channel capacity problem. I was convinced that I was close to a breakthrough. (I wasn't.) Papers were everywhere, intermingled with the remnants of far too many 10¢ Taco Tuesday wrappers.

A good friend stopped by to bring better food, lend a mathematical hand, and put an end to my solitary madness. She listened carefully while I jumped across the room grabbing papers and incoherently babbling about my "breakthrough."

Then she somberly grabbed a pen and wrote out the flaw that I had missed, obliterating the proof.

I was stunned and heartbroken. She looked up and said, "But this is great, because what you've done here is define the problem more concretely." She continued with a simple truth that I have carried with me ever since:

"Most times, defining the problem is harder and more valuable than finding the answer. The world needs hard, well-defined problems. Generally, only one or two people work together on an answer—but hundreds work on a well-defined problem."

And so, dear reader, this is where I would like to begin. This article looks at the problems inherent in building a more decentralized Internet. Audacious? Yes, but this has become a renewed focus in recent years, even by the father of the Web himself (see Tim Berners-Lee's Solid project4). Several companies and open source projects are now focusing on different aspects of the "content-delivery" problem. Our company Edgemesh (https://edgemesh.com) is working on peer-enhanced client-side content acceleration, alongside other next-generation content-delivery networks such as Peer5 (https://peer5.com) and Streamroot (https://streamroot.io), both of which are focused on video delivery. Others, such as the open source InterPlanetary File System (IPFS; https://ipfs.io) project, are looking at completely new ways of defining and distributing "the Web."

Indeed, the concept of a better Internet has crept into popular media. In season 4 of HBO's sitcom Silicon Valley, the protagonist Richard Hendricks devises a way to distribute content across the Internet in a completely decentralized manner using a P2P protocol. "If we could do it, we could build a completely decentralized version of our current Internet," Hendricks says, "with no firewalls, no tolls, no government regulation, no spying. Information would be totally free in every sense of the word." The story line revolves around the idea that thousands of users would allocate a small portion of the available storage on their mobile devices, and that the Pied Piper software would assemble the storage across these devices into a distributed storage "cloud." Then, of course, phones explode and hilarity ensues.

The core idea of a distributed Internet does make sense, but how would it be built? As I learned in my self-imposed solitary confinement so long ago, before diving into possible solutions, you need to define the problems more clearly.


Problems of a Distributed Internet

In 2008, Tom Leighton, cofounder of Akamai, the largest content-distribution network in the world, outlined four major architectures for content distribution: centralized hosting, "big datacenter" content-delivery networks (CDNs), highly distributed CDNs, and P2P networks. Of these, Leighton noted that:

"P2P can be thought of as taking the distributed architecture to its logical extreme, theoretically providing nearly infinite scalability. Moreover, P2P offers attractive economics under current network pricing structures."11

He then noted what others have found in the past: although the P2P design is theoretically the most scalable, there are several practical issues—specifically throughput, availability, and capacity.

Throughput. The most commonly cited issue is the limited uplink capacity of edge devices, as Leighton observed in his 2008 article:

P2P faces some serious limitations; most notably because the total download capacity of a P2P network is throttled by its total uplink capacity. Unfortunately, for consumer broadband connections, uplink speeds tend to be much lower than downlink speeds: Comcast's standard high-speed Internet package, for example, offers 6Mbps for download but only 384Kbps for upload (1/16th of download throughput).11

This limitation is not as acute today as it was nearly a decade ago, when upload speeds in the U.S. hovered around 0.5Mbps. Figure 1 shows current upload and download rates as reported by Speedtest.net (http://www.speedtest.net/reports/). These data points show that global "last-mile" throughput rates are nearly 30 times their 2008 counterparts. Is this enough? Would a peer with an upload rate in the lower quartile of these metrics (∼4Mbps) suffice? This question has been thoroughly explored in regard to actual webpage load time.

Figure 1. Bandwidth rates (upload/download, broadband/mobile).

When Mike Belshe, of Google SPDY fame, looked at the relationship between end-client bandwidth and page-load time, he discovered that "more bandwidth doesn't matter (much)."3 Once the client's available bandwidth reaches 5Mbps, additional bandwidth has a negligible impact on page-load time. Figure 2 shows page-load time as a function of client bandwidth, assuming a fixed RTT (round-trip time) of 60ms.

Figure 2. Page load time as a function of client bandwidth.

Availability. The next major hurdle for the distributed Internet is peer availability. Namely, are there enough peers, and are they online and available for enough time to provide value? In the past 10 years the edge device count has certainly increased, but has it increased enough? Looking at "Internet Trends 2017,"14 compiled by Mary Meeker of the venture capital firm KPCB, you can see how much the "available peer" count has increased over the past 10 years from mobile alone (see Figure 3).

Figure 3. Mary Meeker's 2017 smartphone analytics.

Today, approximately 49% of the world's population is connected10—around 3.7 billion people, many with multiple devices—so that's a big pool of edge devices. Peter Levine of the venture capital firm Andreessen Horowitz has taken the projection a few years further, predicting that we will shortly go beyond billions and head toward trillions of devices.12

You can get a sense of scale by looking at an Edgemesh network for a single e-commerce customer's website with a global client base.

It's probably safe to say there are enough devices online, but does the average user stay online long enough to be available? What is "long enough" for a peer to be useful?

A sensible place to start is to require that peers be online long enough for any peer to reach any other peer anywhere on the globe. Given that, we can set some bounds.

The circumference of the Earth is approximately 40,000km. The rule of thumb is that light takes 4.9 microseconds to move 1km through fiber optics. That would mean data could circumnavigate the globe in about one-fifth of a second (196 milliseconds). Oh, if wishing only made it so, but as Stanford University's Stuart Cheshire points out in "It's the Latency, Stupid,"6 the Internet operates at least a factor of two slower than this. That 2x slowdown means it would take approximately 400 milliseconds to get around the globe. Unfortunately, I have spent some time in telecom—specifically in latency-optimized businesses13—and I think this figure needs to be doubled again to account for global transit routing; thus, data can go around the world in some 800 milliseconds. If users were online and available only for sub-800-millisecond intervals, this would be problematic. Since most decentralized solutions require the user to visit the content (for example, be on the website), the real question is: what is the average page-view time for users across the globe?

Turns out it is 3 minutes 36 seconds,24 or 216,000 milliseconds.

To double-check this, I took all peer-session times (the amount of time Edgemesh peers were online and connected to the mesh) across the Edgemesh user base for the past six months (Figure 4). The average was right in line at 3 minutes 47 seconds.

Figure 4. Histogram of peer duration.

In either case, if a node stays online only as long as an average page view, that is enough time for data to circumnavigate the globe 270 times—certainly long enough to contact a peer anywhere on Earth.
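
For the skeptical reader, the arithmetic is easy to check. Here is a minimal TypeScript sketch; the only inputs are the rule-of-thumb constants already given above:

const circumferenceKm = 40000;   // distance around the globe
const fiberUsPerKm = 4.9;        // light through fiber, per km

const idealMs = (circumferenceKm * fiberUsPerKm) / 1000; // ~196ms
const internetMs = idealMs * 2;       // Cheshire's 2x slowdown: ~392ms
const roundTheWorldMs = internetMs * 2; // transit-routing penalty: ~784ms, call it 800ms

const avgPageViewMs = 216000;    // 3 minutes 36 seconds
console.log(Math.floor(avgPageViewMs / 800)); // 270 trips around the globe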

Capacity. If enough users are online for a long enough duration, and they have an acceptable egress throughput (upload bandwidth), all that remains is the question of whether there is enough spare capacity (disk space) available to provide a functional network.

If we assume a site has 20% of its users on mobile and 80% on desktop—and further allow 500MB of available capacity per desktop user and 50MB per mobile user (the lower end of browser-available storage pools)—we can estimate the mesh size required to achieve a given cache-hit rate, assuming the requested content follows a Zipf distribution.1 Essentially, a website with 500GB of static content—about 16 million average-sized Web images—would need an online capacity of two million distinct nodes to achieve a theoretical offload of 100% of its traffic to a P2P mesh (approximately an 8:1 ratio of images to users).
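
As a rough illustration (not Edgemesh's actual model), the aggregate-capacity arithmetic can be sketched in a few lines of TypeScript; replicationFactor here is a hypothetical stand-in for the Zipf-derived redundancy needed to reach a target hit rate:

// 80% desktop nodes at 500MB, 20% mobile nodes at 50MB
const avgNodeCapacityMB = 0.8 * 500 + 0.2 * 50; // 410MB per online node

function nodesNeeded(contentGB: number, replicationFactor: number): number {
  const totalMB = contentGB * 1024 * replicationFactor;
  return Math.ceil(totalMB / avgNodeCapacityMB);
}

// 500GB of static content; a replication factor of ~1,600 reproduces
// the two-million-node figure cited above.
console.log(nodesNeeded(500, 1600)); // ~2,000,000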


Enabling a Distributed Internet

Now that we have better defined the problems and established the theoretical feasibility of a new solution, it's time to look at the technology available to bring to bear on the problem. To start, we can constrain our focus a bit. Implementations such as IPFS focus on distributing the entire content base, allowing you to free yourself from the restrictions of Web servers and DNS entirely. This is a fantastic wholesale change, but the tradeoff is that it will require users to dramatically modify how they access content.

Since a P2P design is dependent on the total network size, this model has difficulty growing until it reaches critical mass. At Edgemesh, we wanted to focus on enhancing existing Web-content delivery transparently (for example, in the browser) without requiring any changes to the user experience. This means ensuring that the technology abides by the following three restrictions:

  • For users, the solution should be transparent.
  • For developers, the solution should require zero infrastructure changes.
  • For operations, the solution should be self-managing.

The next question is where exactly to focus.

Fully enabling peer-enhanced delivery of all content is difficult and dangerous—especially if peer-delivered JavaScript is allowed to execute. Is there an 80% solution? Trends posted by HTTP Archive reveal that static components (images/video/fonts/CSS) make up roughly 81% of the total page weight.9

Given these details, let's narrow the focus to enabling/enhancing edge delivery of these more traditional CDN assets and the associated challenges of moving and storing data.

Moving data: Building a new network (overlay). To support P2P distribution, an overlay network needs to be developed to allow the P2P connections to operate within the larger existing Internet infrastructure. Luckily, such a stack is available: WebRTC (Real-Time Communications19). Started in earnest in 2011 by Google, WebRTC is an in-browser networking stack that enables peer-to-peer communication. WebRTC is primarily employed by voice and video applications (Google Hangouts/Duo/Allo, Slack, Snapchat, Amazon Chime, WhatsApp, Facebook Messenger) to facilitate peer-to-peer video- and audioconferencing.

WebRTC is a big deal; in June 2016 Google provided several key milestones7 from stats it collected, with some additional updates six months later:25

  • Two billion Chrome browsers with WebRTC.
  • One billion WebRTC audio/video minutes per week on Chrome.
  • One petabyte of DataChannel traffic per week on Chrome (0.1% of all Web traffic).
  • 1,200 WebRTC-based companies and projects (it was 950 in June).
  • Five billion mobile app downloads that include WebRTC.

WebRTC support exists in all the major browsers—Chrome, Firefox, Edge, and now Safari.2 Comparing WebRTC's five-year adoption against other VoIP-style protocols shows the scale of that deployment.8

WebRTC is a user-space networking stack. Unlike HTTP, which depends on TCP for transfer, WebRTC is built on a different transport—SCTP (Stream Control Transmission Protocol)—which it encapsulates in UDP (User Datagram Protocol). This allows for much lower-latency transfer, removes head-of-line blocking, and, as a separate network stack, allows WebRTC to use significantly more bandwidth than HTTP alone.

SCTP is a little like the third wheel of the transport layer of the Open Systems Interconnection (OSI) model—we often forget it's there but it has some very powerful features. Originally introduced to support signaling in IP networks,22 SCTP quickly found adoption in next-generation networks (IMS and LTE).

WebRTC leverages SCTP to provide a reliable, message-oriented delivery transport (encapsulated in UDP or TCP, depending on the implementation5). Alongside SCTP, WebRTC leverages two additional major protocols: Datagram Transport Layer Security (DTLS) for security (a derivative of SSL) and Interactive Connectivity Establishment (ICE) to allow for support in network address translation (NAT) environments, for example, firewall traversal.

The details of the ICE protocol and how it works with signaling servers—for example, STUN and TURN—are beyond the scope of this article, but suffice it to say that WebRTC has all the necessary plumbing to enable real peer-to-peer networking.
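
To make that plumbing concrete, here is a minimal TypeScript sketch of a DataChannel setup on the initiating side; the signaling object is a placeholder for whatever channel (a WebSocket, or even copy/paste) carries the offer and ICE candidates to the remote peer:

declare const signaling: { send(msg: unknown): void }; // placeholder transport

const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }], // public STUN server
});

// SCTP-backed, message-oriented channel between the two browsers
const channel = pc.createDataChannel('assets');
channel.onopen = () => channel.send('hello, peer');
channel.onmessage = (e) => console.log('from peer:', e.data);

// ICE candidates must reach the other side via the signaling channel
pc.onicecandidate = (e) => {
  if (e.candidate) signaling.send({ candidate: e.candidate });
};

async function start() {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ sdp: pc.localDescription }); // the remote peer answers in kind
}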

A simple example is Serene Han's WebRTC implementation in Go.21 Han's chat demo allows you to pass the SDP (Session Description Protocol) details between two machines (copy/paste signaling) to enable P2P chat. To run this yourself (assuming you have Docker installed locally), simply do the following:

docker run -it golang bash

Then in the Docker instance, this one-liner will get you set up:

apt-get update && apt-get install -y libx11-dev && \
  go get github.com/keroserene/go-webrtc && \
  cd /go/src/github.com/keroserene/go-webrtc && \
  go run demo/chat/chat.go

If you prefer a browser-native starting point, look at the simple-peer module,20 originally from Feross Aboukhadijeh's work with WebTorrent (https://webtorrent.io).
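
A hedged sketch of the same copy/paste-signaling exercise, using simple-peer's documented API (run it in two tabs, with #init in one tab's URL):

import Peer from 'simple-peer';

const peer = new Peer({ initiator: location.hash === '#init', trickle: false });

// Print the SDP blob; paste it into the other tab via peer.signal(...)
peer.on('signal', (data) => console.log(JSON.stringify(data)));

peer.on('connect', () => peer.send('ping'));
peer.on('data', (data) => console.log('got:', data.toString()));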

Storing data: Browser storage options and asset interception. The next step is finding a method both to intercept standard HTTP requests and to store peer-to-peer-delivered assets. For the request-intercept problem, you need look no further than the service worker.18 The service worker is a relatively new feature, available in most browsers, that allows a background process to run in the browser. Like a Web worker, which can be used as a proxy for threads, a service worker is restricted in how it can interact and exchange data with the Document Object Model (DOM).

The service worker does, however, have a powerful feature that was originally developed to support offline page loads: the Fetch API.16 The Fetch API allows a service worker to intercept request and response calls, similar to an HTTP proxy.

With the service worker online, you can now intercept traditional HTTP requests and offload them to the P2P network. The last remaining component is a browser-local storage model where P2P-accelerated content can be stored and distributed. Although no fewer than five different storage options exist in the browser, IndexedDB17 is the only storage API available both within a service-worker context and in the DOM context, where the WebRTC code can execute, which is why Edgemesh chose it as the storage base. Alternatively, the CacheStorage API may also be used within the service-worker context.15
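
A minimal sketch of the intercept-and-serve pattern in a TypeScript service worker, assuming peer-delivered assets have been stored in an IndexedDB object store keyed by URL (this illustrates the mechanism, not Edgemesh's implementation):

declare const self: ServiceWorkerGlobalScope;

// Look up a previously stored asset (a Blob keyed by its URL).
function idbGet(url: string): Promise<Blob | undefined> {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open('mesh-cache', 1);
    open.onupgradeneeded = () => open.result.createObjectStore('assets');
    open.onerror = () => reject(open.error);
    open.onsuccess = () => {
      const req = open.result.transaction('assets', 'readonly')
        .objectStore('assets').get(url);
      req.onsuccess = () => resolve(req.result as Blob | undefined);
      req.onerror = () => reject(req.error);
    };
  });
}

// Serve from the local store when possible; fall back to the network.
self.addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(
    idbGet(event.request.url)
      .then((blob) => (blob ? new Response(blob) : fetch(event.request)))
      .catch(() => fetch(event.request)),
  );
});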


Implementing a Distributed Internet

We have a theoretically viable model to support peer-to-peer content delivery. We have a functional network stack to enable ad-hoc efficient peer-to-peer transfer and access to an in-browser storage medium. And so, the game is afoot!

Figure 5 is a flowchart of the Edgemesh P2P-accelerated content-delivery system. It shows where the service-worker framework enables asset interception, and where WebRTC (aided by a signaling server) facilitates browser-to-browser asset replication.

Figure 5. Flow diagram for WebRTC Enhanced asset delivery.

Returning to Mike Belshe's research, we can start to dig into some of the key areas to be optimized. Unlike bandwidth, where capacity beyond 5Mbps has a negligible impact, every increase in latency (RTT) directly increases page-load time.3

WebRTC is already an efficient protocol, but the peer-selection process presents opportunities for further latency reduction. For example, if you are located in New York, being served by a peer in Tokyo is likely a suboptimal choice.

A simple optimization is to prefer peers that reside in the same network, perhaps identified by each peer's autonomous system (AS) number.23 Even this simple optimization can cut average latency by a factor of two. Figure 6 shows the performance increase from preferring intra-AS routing.

Figure 6. Performance increased by routing preference.
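
A hypothetical TypeScript sketch of such a ranking (PeerInfo, asn, and rttMs are illustrative names, not a real API):

interface PeerInfo { id: string; asn: number; rttMs: number; }

// Prefer peers inside our own autonomous system, then sort by latency.
function rankPeers(candidates: PeerInfo[], ourAsn: number): PeerInfo[] {
  return [...candidates].sort((a, b) => {
    const sameA = a.asn === ourAsn ? 0 : 1;
    const sameB = b.asn === ourAsn ? 0 : 1;
    return sameA - sameB || a.rttMs - b.rttMs;
  });
}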

Another optimization is choosing which assets to replicate into a peer. For example, while a user is browsing a site's landing page, we can precache the images for the next pages, effectively eliminating that latency altogether. This is a current area of research for the team at Edgemesh, but early solutions have already shown significant promise. Figure 7 shows the effective render time for Edgemesh-enabled clients (accelerated) and non-Edgemesh-enabled clients (standard) for a single customer domain. The average page-load time has been reduced by almost a factor of two.

Figure 7. Effect of P2P enhanced background caching on load time.

This is most clearly seen when most of the page content can be effectively precached, as shown in the page-load time statistics of Figure 8.

Figure 8. Page load time components: Accelerated vs. standard.
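
The precaching idea itself needs very little machinery. Here is a hedged sketch using the CacheStorage alternative mentioned earlier; nextPageAssets is a hypothetical manifest of the next pages' image URLs:

declare const nextPageAssets: string[]; // hypothetical asset manifest

async function precache(urls: string[]): Promise<void> {
  const cache = await caches.open('precache-v1');
  for (const url of urls) {
    if (!(await cache.match(url))) await cache.add(url); // skip what we already hold
  }
}

// Run when the browser is idle so we never compete with the visible page.
requestIdleCallback(() => void precache(nextPageAssets));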


Conclusion

It had been a few days since I had been outside, and a few more since I had made myself presentable. For the previous few weeks the team and I had been locked inside the office around the clock, essentially rewriting the software from scratch. We thought it would take a week, but we were now three months in. The growing pile of empty delivery bags resting atop the ad hoc whiteboard tables we were using made it difficult to make out exactly what the big change was. We were convinced this was the breakthrough we had been looking for (it turns out it was), and that this version would be the one that cracked the problem wide open. I was head down trying to get my parts to work, and I was lost. Then I heard the knock at the door. She came in and sat down, patiently moving aside the empty pizza boxes while I babbled on about our big breakthrough and how I was stuck.

Then, just as she had nearly two decades earlier, she grabbed a marker and said:

"Honey, I think I see the issue. You haven't properly defined the problem. Most times, defining the problem is harder and more valuable than finding the answer. So, what exactly are you trying to solve?"

The world is more connected than it has ever been, and with our pocket supercomputers and an Internet of Things future, the next generation of the Web might just be delivered over a peer-to-peer model. It's a giant problem space, but the necessary tools and technology are here today. We just need to define the problem a little better.


References

1. Adamic, L.A., Huberman, B.A. Zipf's law and the Internet. Glottometrics 3 (2002), 143–150; http://www.hpl.hp.com/research/idl/papers/ranking/adamicglottometrics.pdf.

2. Apple Inc. Safari 11.0; https://developer.apple.com/library/content/releasenotes/General/WhatsNewInSafari/Safari_11_0/Safari_11_0.html.

3. Belshe, M. More bandwidth doesn't matter (much), 2010; https://docs.google.com/a/chromium.org/viewer?a=v&pid=sites&srcid=Y2hyb21pdW0ub3JnfGRldnxneDoxMzcyOWI1N2I4YzI3NzE2.

4. Berners-Lee, T. Solid; https://solid.mit.edu/.

5. BlogGeek.Me. Why was SCTP selected for WebRTC's data channel, 2014; https://bloggeek.me/sctp-data-channel/.

6. Cheshire, S. It's the latency, stupid, 1996–2001; http://www.stuartcheshire.org/rants/latency.html.

7. Google Groups. WebRTC, 2016; https://groups.google.com/forum/#!topic/discuss-Webrtc/I0GqzwfKJfQ.

8. Hart, C. WebRTC: One of 2016's biggest technologies no one has heard of. WebRTC World, 2017; http://www.webrtcworld.com/topics/webrtc-world/articles/428444-webrtc-one-2016s-biggest-technologies-no-one-has.htm.

9. HTTP Archive; http://httparchive.org/trends.php.

10. Internet World Stats. World Internet usage and population statistics—March 31, 2017; http://www.internetworldstats.com/stats.htm.

11. Leighton, T. Improving performance on the Internet. acmqueue 6, 6 (2008); http://queue.acm.org/detail.cfm?id=1466449.

12. Levine, P. The end of cloud computing. Andreessen Horowitz; http://a16z.com/2016/12/16/the-end-of-cloud-computing/.

13. Loveless, J. Barbarians at the gateways. acmqueue 11, 8 (2013); http://queue.acm.org/detail.cfm?id=2536492.

14. Meeker, M. Internet trends 2017—code conference. KPCB; http://www.kpcb.com/internet-trends.

15. Mozilla Developer Network. CacheStorage, 2017; https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage.

16. Mozilla Developer Network. FetchAPI, 2017; https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API.

17. Mozilla Developer Network. IndexedDB API, 2017; https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API.

18. Mozilla Developer Network. Using service workers, 2017; https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers.

19. Real-time communication in Web browsers (rtcweb) Charter for Working Group; http://datatracker.ietf.org/wg/rtcweb/charter/.

20. Simple WebRTC video/voice and data channels. GitHub; https://github.com/edgemesh/simple-peer.

21. WebRTC for Go. GitHub; https://github.com/keroserene/go-webrtc.

22. Wikipedia. Signalling System No. 7; https://en.wikipedia.org/wiki/Signalling_System_No._7.

23. Wikipedia. Autonomous system; https://en.wikipedia.org/wiki/Autonomous_system_(Internet).

24. Wolfgang Digital. E-commerce KPI benchmarks, 2016; https://www.wolfgangdigital.com/uploads/general/KPI_Infopgrahic_2016.jpg.

25. YouTube. WebRTC, 2016; https://youtu.be/OUfYFMGtPQ0?t=16504.


Author

Jacob Loveless is chief executive officer at Edgemesh Corporation, the edge-network acceleration platform. Previously, he served as CEO of Lucera Financial Infrastructures and was a partner at Cantor Fitzgerald, responsible for running the firm's global low-latency trading operations.


Copyright held by owner/author. Publication rights licensed to ACM.
Request permission to publish from [email protected]

The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.