
Communications of the ACM

ACM TechNews

An Algorithm That Hides Your Online Tracks With Random Footsteps


[Illustration: random footprints. Credit: iStockPhoto.com]

Steven Smith at the Massachusetts Institute of Technology's Lincoln Laboratory has developed an algorithmic technique to thwart the collection of online users' browsing history and other sensitive Internet habits. The algorithm conceals Web activity by pumping false traffic out of the user's home network. The program strings a few words from an open-source dictionary together, runs a Google search on them, and captures the resulting links in random order, saving them in a database.
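This search-and-capture step might look roughly like the Python sketch below. It is only an illustration of the idea, not Smith's code: the word-list path, the database name, and the use of DuckDuckGo's HTML endpoint in place of the Google search described above are all assumptions.

```python
# Illustrative sketch only -- not the actual tool. Assumes a local word list at
# /usr/share/dict/words and a generic HTML search endpoint; both are placeholders.
import random
import sqlite3
from urllib.parse import quote_plus

import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://html.duckduckgo.com/html/?q={query}"  # stand-in for the Google search in the article
WORD_LIST = "/usr/share/dict/words"                          # any open-source dictionary file


def random_query(num_words=3):
    """String a few random dictionary words together into a search phrase."""
    with open(WORD_LIST) as f:
        words = [w.strip() for w in f if w.strip().isalpha()]
    return " ".join(random.sample(words, num_words))


def search_links(query):
    """Fetch the search-results page and pull out the outbound links."""
    resp = requests.get(SEARCH_URL.format(query=quote_plus(query)), timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    links = [a["href"] for a in soup.find_all("a", href=True)
             if a["href"].startswith("http")]
    random.shuffle(links)  # capture the links in random order
    return links


def save_links(links, db_path="decoy_urls.db"):
    """Store the captured URLs in a small local database."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS urls (url TEXT UNIQUE)")
    con.executemany("INSERT OR IGNORE INTO urls VALUES (?)",
                    [(u,) for u in links])
    con.commit()
    con.close()


if __name__ == "__main__":
    save_links(search_links(random_query()))
```

Shuffling the captured links before storing them reflects the "random order" the article describes, so the decoy history carries no meaningful ordering of its own.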

In addition, the algorithm follows the Google results, gathering and then following the links that appear on those pages. The table of URLs is capped at about 100,000 entries to prevent memory overload, and another program, the headless browser PhantomJS, regularly downloads data from the captured URLs to mimic someone using a Web browser. As PhantomJS roams the Internet, it shifts between different user agents to create a semblance of multiple users browsing. "I'm basically using common sense and intuition," Smith says.
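The link-following and decoy-browsing step could be sketched along these lines. Again this is an assumption-laden illustration rather than the actual program: the article's tool drives PhantomJS, whereas this sketch substitutes plain HTTP requests with a small, hypothetical pool of rotating User-Agent strings.

```python
# Sketch of the behaviour described above, not the real implementation.
import random
import sqlite3
import time

import requests
from bs4 import BeautifulSoup

MAX_URLS = 100_000   # cap the URL table, as described in the article
USER_AGENTS = [      # hypothetical pool; a real tool would cycle through many more
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]


def table_size(con):
    return con.execute("SELECT COUNT(*) FROM urls").fetchone()[0]


def expand_and_browse(db_path="decoy_urls.db", delay=5.0):
    """Fetch stored URLs under rotating user agents and harvest new links from each page."""
    con = sqlite3.connect(db_path)
    urls = [row[0] for row in con.execute("SELECT url FROM urls ORDER BY RANDOM()")]
    for url in urls:
        headers = {"User-Agent": random.choice(USER_AGENTS)}  # look like different users
        try:
            resp = requests.get(url, headers=headers, timeout=10)
        except requests.RequestException:
            continue
        # Gather and follow the links that appear on the fetched page, up to the cap.
        soup = BeautifulSoup(resp.text, "html.parser")
        new = [(a["href"],) for a in soup.find_all("a", href=True)
               if a["href"].startswith("http")]
        if table_size(con) < MAX_URLS:
            con.executemany("INSERT OR IGNORE INTO urls VALUES (?)", new)
            con.commit()
        time.sleep(delay)  # pace the decoy traffic like a human browsing
    con.close()


if __name__ == "__main__":
    expand_and_browse()
```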

From NextGov.com
View Full Article

 

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 
