
Communications of the ACM

Last Byte

L-Space and Large Language Models


[Illustration: woman writing in a book inside a peephole in a dense circle of metropolitan buildings. Credit: Andrij Borys Associates, Shutterstock]

It was Sir Terry Pratchett who suggested it first. Not the multiple universes, of course—that idea has been around for ages—but the idea that massive aggregation of data produced uncertainty. Sir Terry called it L-space, the warping of space and time by large numbers of books in the Unseen University's library in his Discworld series. It was a passing fancy, a grace note in a rich and well-constructed fantasy world.

That was, up until late 2022, when the public started to have access to large language models. See, it turns out that data is a bit like a black hole. Black holes aren't made from anything special; they're just a side effect of what happens when you get enormous amounts of matter together in one place and it collapses in on itself. But it takes an almost unimaginable amount of matter in one place to cross the boundary conditions and start to form a black hole. This isn't just a lot of matter—our own sun is several orders of magnitude too small to even get close to forming a black hole any time soon.

Data is the same. We had been accumulating it for years, in log files, in rows and columns, in relational databases, but it was passive, archival, dead. Deep neural nets did something else with the data, something that pushed us over the boundary conditions. We started to see glimmers of this in the first wave of massive visual models, like DALL-E, which came out in early 2021. The results were impressive, no question. But as time went on, certain users started to see hints of something strange happening in the results.

Like any other reasonably successful technology, these models had users who spent long nights pushing at the limits of the tool. Message boards and chat threads started to discuss repeating visual patterns in the images. @supercomposite, an artist, named a recurring character "Loab": red cheeks, sunken eyes, a tortured expression—a kind of visual creepypasta, perfect for scaring yourself in the early hours of the morning. Giannis Daras, a computer science student at the University of Texas at Austin, noticed that certain nonsense phrases would reliably produce the same visual results—for example, "apoploe vesrreaitais" generates pictures of birds, as if it were a phrase in some unknown language. But in the world of images, these seemed of passing importance. Debates soon focused on intellectual property and artists' rights, and these hints of unforeseen complexity were forgotten.

Things came to a head in late 2022 with the release of ChatGPT and similar large language models. For some topics, it performed remarkably well. Ask ChatGPT to come up with a set of principles or guidelines for some domain, and it would pass, like a B-average undergrad: not a particularly good job, but impressive nonetheless. It was better at fixing code, or writing bits of code for you, drawing from the enormous libraries of programming questions and answers on the Web. It didn't always get things right, but it was useful without being so good that programmers started to worry about their jobs.

Things got stranger when you started to ask about individuals. Not celebrities, whose every move had been tracked in gossip blogs and glossy magazines, but the sorts of people that users exploring language models would be disproportionately likely to look up: scientists, researchers, professors. Half of them were narcissistically looking themselves up to see if the model knew their h-index or had read their latest paper. And this is where things got weird.

The answers were often wrong. Very wrong. But, and this is key, plausibly wrong. They were truthy, as Stephen Colbert would say. They had the aura of being truthful but were just… wrong. Answers would be near the truth. Scientists with degrees from MIT and Yale found the model claimed they were alums of Georgia Tech and Princeton. People who had worked at Xerox PARC and Nokia were surprised to see they had a track record at Bell Labs and IBM Research. People with a history of encouraging women in computing were apparently on the board of the Anita Borg Institute. Researchers were listed as having written papers and books they had never written—interesting and intriguing publications with plausible titles, published in reputable-sounding journals and conferences, just not ones that existed.

Or at least…not ones that existed in this universe.

It turns out Sir Terry had been right all along: Some things are consistent across universes. In particular, researchers everywhere do what researchers always do, which is publish papers. Some students found that if you carefully timed your queries, you could overwhelm the database and access the underlying papers that had been absorbed into the language model.


I started reading through back issues of the Journal of Interaction, which doesn't exist. But in some other place, something very like our universe, it's a major publishing venue. In that universe, the warping of space and time due to the massive accumulation of knowledge is well known, but it had been thought to be of little practical import. I even found the original paper: Kaye, Garabedian, and Lantz, "Gravitational metric distortion by massive data accumulation." Journal of Interaction 22, 9 (2022). It's been cited, some. Not a lot, really. Which is a bit disappointing because in that other universe, I'm one of those authors.

And in that other universe, I'm seeing glimmers, hints, preprints, all starting to suggest that we might be able to communicate between the universes. I'm seeing papers discussing something called α-verse and Φ-parameters, but I can't figure out what they mean from our universe. Wrong frame of reference, you know.

But I think they're reading our articles, leafing through our journals. I just need to get something published. Something to let them know…

Hi. I'm here. I can read your papers. Can you read mine?


Author

Jofish Kaye ([email protected]) directs research teams to produce thoughtful, ethical, and impactful HCI and AI-driven products and prototypes, using tools such as user studies, surveys, big data, and even speculative fiction.


© 2023 Copyright held by Owner/Author.


 

