
Communications of the ACM

ACM Opinion

Interpretability of Artificial Neural Network Models in AI versus Neuroscience


[Figure: Illustration of a human brain with neurons and synapses lit up like a digital network. Credit: Getty Images]

Researchers should conceptually separate the objectives of AI and neuroscience while interpreting the parameters and operations of current computational models.

In neuroscience, interpretability often implies alignment with brain constructs. Conversely, in artificial intelligence (AI), the emphasis is on making a model's decision-making process more transparent and explicable to a human interpreter.

Attempts to make artificial neural networks (ANNs) more interpretable to neuroscientists should not be conflated with ongoing efforts in explainable AI. Nonetheless, both AI researchers and neuroscientists can leverage the synergy between the two fields when working toward interpretable ANN models.

From Nature Machine Intelligence


 

