
Communications of the ACM

ACM Opinion

Interpretable Machine Learning


Been Kim, staff research scientist at Google Brain.

Credit: Been Kim/The Gradient.

As a staff research scientist at Google Brain, Been Kim focuses on interpretability: helping humans communicate with complex machine-learning models, not only by building tools but also by studying how humans interact with these systems.

In an interview, Kim discusses her path to AI and interpretability research, the relationship between interpretability and software testing, Testing with Concept Activation Vectors (TCAV) and its limitations, the acquisition of chess knowledge in AlphaZero, and much more.

From The Gradient


 
