
Communications of the ACM

ACM News

The Text File that Runs the Internet


[Illustration: There are several breeds of Internet robot. Credit: Erik Carter]

There's now so much money in AI, and the technological state of the art is changing so fast, that many site owners can't keep up.

For three decades, a tiny text file has kept the Internet from chaos. The file has no legal or technical authority, and it isn't even particularly complicated. It represents a handshake deal between some of the earliest pioneers of the Internet to respect each other's wishes and build the Internet in a way that benefited everybody. It's a mini constitution for the Internet, written in code.

It's called robots.txt, and it's usually located at yourwebsite.com/robots.txt. That file allows anyone who runs a website, big or small, cooking blog or multinational corporation, to tell the Web who's allowed in and who isn't. Which search engines can index your site? Which archival projects can grab a version of your page and save it? Can competitors keep tabs on your pages for their own files? You get to decide and declare that to the Web.
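To make that concrete, here is a minimal sketch of what such a file might look like. The directives (User-agent, Disallow) and the Googlebot token are standard, but the paths below are illustrative assumptions rather than rules from any particular site:

    # Hypothetical robots.txt: rules are grouped by crawler name
    User-agent: Googlebot
    Disallow: /drafts/      # keep unfinished pages out of search results

    # Every other crawler falls under the catch-all group below
    User-agent: *
    Disallow: /

A crawler that honors the protocol fetches this file before anything else on the site; one that doesn't simply ignores it. That voluntary compliance is the "handshake deal" part.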

It's not a perfect system, but it works. Used to, anyway. For decades, the main focus of robots.txt was on search engines; you'd let them scrape your site and in exchange they'd promise to send people back to you. Now AI has changed the equation: companies around the Web are using your site and its data to build massive sets of training data, in order to build models and products that may not acknowledge your existence at all.
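That shift is why a growing number of robots.txt files now single out AI crawlers by name. OpenAI's GPTBot, for example, is a publicly documented user-agent token, so a site-wide opt-out can be as short as the sketch below, with the caveat that robots.txt is a request rather than an enforcement mechanism; it only works if the crawler chooses to honor it:

    # Ask OpenAI's training-data crawler to stay off the entire site
    User-agent: GPTBot
    Disallow: /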

From The Verge