
How Randomness Improves Algorithms


When good choices abound, guessing randomly can be surprisingly fruitful.

Computer scientists often find it easier to develop a deterministic algorithm by starting with a randomized version and then “de-randomizing” it.

Credit: Kristina Armitage/Quanta Magazine

Since the very first days of computer science — a field known for its methodical approach to problem-solving — randomness has played an important role. The first program to run on the world's first general-purpose electronic computer used randomness to simulate nuclear processes. Similar approaches have since been used in astrophysics, climate science and economics. In all these cases, plugging in random numbers at certain steps in the algorithm helps researchers account for uncertainty about the many ways that complex processes can play out.
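To make that idea concrete, here is a toy Monte Carlo sketch in Python. It is only an illustration of the general pattern the paragraph describes, not the original ENIAC simulation, and the process it models (a hypothetical particle absorbed with some fixed probability at each step) is invented for the example: run the process many times with random inputs, then average the outcomes to estimate a quantity you might otherwise have to derive analytically.

    import random

    def simulate_particle(p_absorb=0.3, max_steps=100):
        """One random trajectory of a toy particle: at each step it is
        absorbed with probability p_absorb, otherwise it keeps going.
        Returns the number of steps survived before absorption."""
        for step in range(max_steps):
            if random.random() < p_absorb:
                return step
        return max_steps

    def estimate_mean_survival(trials=100_000):
        """Average many random trajectories to estimate the expected
        survival time, instead of working it out on paper."""
        return sum(simulate_particle() for _ in range(trials)) / trials

    print(estimate_mean_survival())  # close to (1 - 0.3) / 0.3, about 2.33

With enough trials the average converges on the true expectation; the same pattern of running a random process many times and aggregating the results underlies the simulations mentioned above.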

But adding randomness into an algorithm can also help you calculate the correct answer to unambiguous true-or-false questions. "You just say 'OK, let me give up, let me not try, let me just pick something at random,'" said Eric Blais, a computer scientist at the University of Waterloo. "For way too many problems, that ends up being a successful approach."
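The article does not name a specific algorithm at this point, but a standard textbook illustration of the "pick something at random" strategy is Freivalds' check for matrix products, sketched below in Python (the function names and test matrices here are our own). To decide the true-or-false question "is C really the product of A and B?", it multiplies by random vectors rather than recomputing the whole product.

    import random

    def freivalds(A, B, C, trials=20):
        """Decide, with high probability, whether A @ B == C for n x n
        matrices. Each trial picks a random 0/1 vector r and compares
        A(Br) with Cr, costing O(n^2) per trial instead of the O(n^3)
        needed to recompute the full product."""
        n = len(A)

        def mat_vec(M, v):
            return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

        for _ in range(trials):
            r = [random.randint(0, 1) for _ in range(n)]
            if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
                return False   # a mismatch proves A @ B != C
        return True            # no trial found a discrepancy: "probably equal"

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(freivalds(A, B, [[19, 22], [43, 50]]))  # True: the genuine product
    print(freivalds(A, B, [[19, 22], [43, 51]]))  # almost certainly False

A wrong answer slips past a single trial with probability at most 1/2, so twenty independent trials push the chance of error below one in a million; this trade of a tiny, controllable risk for a large speedup is typical of such randomized algorithms.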

From Quanta Magazine