MIT researchers are working to improve the interpretability of the features that machine learning models use, so that decision makers will be more comfortable acting on those models' outputs. Drawing on years of field work, they developed a taxonomy to help developers craft features that their target audience will find easier to understand.
"We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself," says Alexandra Zytek, an electrical engineering and computer science Ph.D. student and lead author of a paper introducing the taxonomy.
To build the taxonomy, the researchers defined properties that make features interpretable for five types of users. They also offered instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend, as sketched below.
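As a rough illustration of what such a transformation might look like in practice, the sketch below maps model-ready encodings back to plain-language descriptions before they appear in an explanation. The feature names, metadata fields, and encoding types are hypothetical examples, not the researchers' actual tool or taxonomy.

```python
# Illustrative sketch (assumed, not the researchers' method): convert
# model-ready feature values back into terms a domain expert recognizes,
# so an explanation is not phrased in one-hot flags or z-scores.

def human_readable(feature_name, model_value, metadata):
    """Map a model-ready feature value back to a plain-language description."""
    meta = metadata[feature_name]
    if meta["kind"] == "one_hot":
        # e.g. "school_type_charter = 1" -> "School type: charter"
        category = feature_name.replace(meta["prefix"], "")
        return f"{meta['label']}: {category}" if model_value == 1 else None
    if meta["kind"] == "standardized":
        # Undo standardization: z-score -> original units
        raw = model_value * meta["std"] + meta["mean"]
        return f"{meta['label']}: {raw:.2f} {meta['unit']}"
    return f"{meta['label']}: {model_value}"

# Hypothetical metadata describing how each feature was encoded.
metadata = {
    "school_type_charter": {
        "kind": "one_hot", "prefix": "school_type_", "label": "School type"},
    "attendance_rate": {
        "kind": "standardized", "mean": 0.92, "std": 0.05,
        "label": "Attendance rate", "unit": "(fraction of days)"},
}

print(human_readable("school_type_charter", 1, metadata))   # School type: charter
print(human_readable("attendance_rate", -1.8, metadata))    # Attendance rate: 0.83 ...
```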
The researchers hope their work will inspire model builders to consider using interpretable features from the beginning of the development process.
From MIT News