The evolution of artificial intelligence and related technologies has the potential to drastically increase the clinical importance of automated diagnosis tools. Putting these tools into use, however, is challenging, since the algorithm outcome will be used to make clinical decisions, and wrong predictions can prevent the most appropriate treatment from being provided to the patient. Models should not only provide accurate predictions but also evidence that supports the outcomes, so they can be audited and their predictions double-checked. Some models are constructed in such a way that they are difficult to interpret, hence the name black-box models. While there are methods that generate explanations for generic black-box classifiers,9 these solutions are usually not tailored to the needs of physicians and do not take any medical background into consideration. Our claim, in this work, is that explanations must be based on features that are meaningful to physicians. We call these contextual features.
In order to improve accuracy and transparency in automatic ECG analysis, we propose generating explanations based on contextual features for ECG diagnosis.
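To make the idea of contextual features concrete, the sketch below computes two quantities a physician already reasons about, heart rate and rhythm regularity, directly from a single-lead ECG. The specific features, peak-detection parameters, and synthetic signal are illustrative assumptions for this sketch, not the method or features used in our work.

```python
# A minimal sketch, assuming "contextual features" means quantities a
# physician already interprets (heart rate, RR-interval variability).
import numpy as np
from scipy.signal import find_peaks

def contextual_features(ecg: np.ndarray, fs: float) -> dict:
    """Extract simple physician-meaningful features from a single-lead ECG."""
    # Detect R peaks: prominent maxima at least 300 ms apart (assumed refractory period).
    peaks, _ = find_peaks(ecg, distance=int(0.3 * fs), prominence=0.5 * np.std(ecg))
    rr = np.diff(peaks) / fs                      # RR intervals in seconds
    if len(rr) == 0:
        return {"heart_rate_bpm": float("nan"), "rr_std_s": float("nan")}
    return {
        "heart_rate_bpm": 60.0 / rr.mean(),       # mean heart rate
        "rr_std_s": float(rr.std()),              # rhythm-regularity proxy
    }

# Toy usage: a synthetic train of sharp spikes at 1 Hz (~60 bpm), sampled at 250 Hz.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.0 * t) ** 63
print(contextual_features(ecg, fs))
```

Features of this kind can then back an explanation a physician can audit ("classified as tachycardia because the estimated heart rate exceeds 100 bpm"), in contrast to saliency over raw samples.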
Deep neural networks are relevant examples of black-box models. Trained on large real-world datasets, these models have demonstrated the ability to provide extremely accurate diagnoses.1,5 However, these large and complex stacks of transformations usually do not allow easy interpretation of their results. Despite their potential to transform healthcare and clinical practice,3,8 significant challenges remain. For instance, neural network results are often brittle, either because the network learns to solve the task in unwanted ways or because even small input perturbations can have a huge impact on its output.2
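The sketch below illustrates the perturbation issue on a toy linear (logistic) model: changing every input coordinate by only 0.05 flips the predicted class, because the small per-coordinate changes accumulate along the weight vector over many dimensions. The weights and input are synthetic; this mirrors the general idea behind adversarial examples, not any specific result cited above.

```python
# Toy brittleness demo: a gradient-sign perturbation of 0.05 per coordinate
# flips the prediction of a high-dimensional logistic model.
import numpy as np

rng = np.random.default_rng(0)
d = 256                                   # input dimensionality (assumed)
w = rng.normal(size=d)                    # synthetic, untrained model weights

def predict(v):
    return 1 / (1 + np.exp(-w @ v))       # probability of the positive class

x0 = rng.normal(size=d)
x = x0 + (0.5 - w @ x0) * w / (w @ w)     # project the input just onto the positive side (logit = 0.5)

eps = 0.05                                # tiny change per coordinate (~5% of typical magnitude)
x_adv = x - eps * np.sign(w)              # move each coordinate against the weight sign

print(predict(x))                         # ~0.62: classified as positive
print(predict(x_adv))                     # ~1e-4: flipped to confidently negative
```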