The news archive provides access to past news stories from Communications of the ACM and other sources by date.
Researchers at Canada's Concordia University found security bugs in 95 of 146 popular Android applications designed for older adults.
Surgeon General Dr. Vivek Murthy urges action to ensure social media environments are healthy and safe.
An overreliance on technology startups may have been a major factor in its demise.
Leaders of Taiwan's semiconductor industry fear the nation's supply of engineers will be unable to meet demand for new talent.
Scientists compared leading quantum computers using the Quantum Computing User Program at the U.S. Department of Energy's Oak Ridge National Laboratory.
Georgia Institute of Technology scientists have built an automatic feeding machine for gorillas at Zoo Atlanta that allows for more natural foraging.
Applied Materials is betting technical talent at nearby universities and the local companies that design chips will spur innovation quickly, making up for cost differences with other locations.
Generative A.I. is already changing how games are made, with Blizzard Entertainment training an image generator on assets from World of Warcraft, Diablo, and Overwatch.
The financial services industry is plotting how to incorporate tools like ChatGPT into its products. But humans will still be necessary to provide personal advice.
Industry representatives argue the ruling creates legal uncertainty for many companies that routinely transfer data across international borders.
World leaders at the Group of Seven summit in Japan called for the development of international standards to limit the potential damage from rapid innovations in artificial intelligence.
Cattle ranchers increasingly have access to virtual fences that allow them to use electronic collars to keep their cows from wandering away.
Higher education institutions are seeing rising enrollment in computer science at the same time interest in humanities is declining.
More than a dozen companies have popped up to offer services aimed at identifying whether photos, text, and videos are made by humans or machines.
Yet another congressional hearing, this one featuring OpenAI's Sam Altman, has come only after a new technology with the potential to fundamentally alter our lives was already in circulation.
"You can't prevent people from creating nonsense or dangerous information or whatever."
New guidelines aim to measure, and ultimately reduce, the digital impact of the enterprise on the environment.
Quantum computing research has been given a boost at the University of Chicago and Japan's University of Tokyo with a $150-million investment from IBM and Google.
Digital imaging by deepwater seabed mapping company Magellan for U.K. TV production company Atlantic Productions has yielded a "digital twin" of the RMS Titanic.
Researchers have created a large language model trained on Dark Web data.
Google will delete accounts after two years of inactivity, and experts expect more data deletion policies to come.
The Court ruled the families of terrorism victims had not shown the companies "aided and abetted" attacks on their loved ones.
Drugmakers worldwide are adopting artificial intelligence in the hope of accelerating drug discovery and time to market while cutting costs.
The FatNet algorithm can convert any convolutional neural network into a specialized network that is more compatible with an optical artificial intelligence accelerator.
A next-generation in-silico statistical simulator can support a benchmarking tool for medical and biological researchers to assess and validate computational methods.
National University of Singapore scientists created a three-dimensional light-field sensor that can reconstruct scenes with ultra-high angular resolution.
Researchers turned to the depths of the dark web to train a new large language model.
Technologists warn about the dangers of the so-called singularity. But can anything actually be done to prevent it?
At a congressional hearing, senators from both parties and OpenAI CEO Sam Altman said a new federal agency was needed to protect people from AI gone bad.
A provocative paper from researchers at Microsoft claims A.I. technology shows an ability to understand concepts the way people do. Critics say those scientists are kidding themselves.