In neuroscience, interpretability often implies alignment with brain constructs. In artificial intelligence (AI), by contrast, the emphasis is on making a model's decision-making process transparent and explicable to a human interpreter.
Attempts to make artificial neural networks (ANNs) more interpretable to neuroscientists should not be conflated with ongoing efforts in explainable AI. Nevertheless, both AI researchers and neuroscientists can leverage the synergy between the two fields when working towards interpretable ANN models.
From Nature Machine Intelligence