One of the dramatic trends at the intersection of computing and healthcare has been patients' increased access to medical information, ranging from self-tracked physiological data to genetic data, tests, and scans. Increasingly, however, patients and clinicians have access to advanced machine learning-based tools for diagnosis, prediction, and recommendation based on large amounts of data, some of it patient-generated. Consequently, just as organizations have had to deal with a "Bring Your Own Device" (BYOD) reality5 in which employees use their personal devices (phones and tablets) for some aspects of their work, a similar reality of "Bring Your Own Algorithm" (BYOA) is emerging in healthcare, with its own challenges and support demands. BYOA is changing patient-clinician interactions and the technologies, skills, and workflows related to them.
In this Viewpoint, we argue that BYOA is changing the patient-clinician relationship and the nature of expert work in healthcare, and that better patient-clinician-information-interpretation relationships can be facilitated by solutions that integrate technological and organizational perspectives.
Situations in which patients have direct access to algorithmic advice are becoming commonplace.4 However, many new tools are based on "black-box" AI technologies whose inner workings are not fully understood by patients or clinicians. For example, most patients with Type 1 diabetes now use continuous glucose monitors and insulin pumps to tightly manage their disease, and their clinicians carefully review the data streams from both devices to recommend dosage adjustments. Recently, however, automated recommender systems that monitor and analyze food intake, insulin doses, physical activity, and other factors influencing glucose levels, and that provide data-intensive, AI-based recommendations on how to titrate the regimen (for example, DreaMed, Tidepool Loop), have reached different stages of FDA approval. These systems use "black box" technology: an alluring proposition for a clinical scenario that requires identifying meaningful patterns in complex and voluminous data.
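To make the scenario concrete, consider a deliberately simplified sketch of the kind of pattern such a system might surface from device data. This toy example is ours, not any vendor's algorithm; the data, field names, and thresholds are invented for illustration.

```python
# Toy illustration only -- not a medical-device algorithm or any
# vendor's method. Data, field names, and thresholds are invented.
from statistics import mean

# Post-breakfast glucose peaks (mg/dL) extracted from a week of CGM data.
cgm_peaks = [
    ("Mon", 214), ("Tue", 198), ("Wed", 231), ("Thu", 162),
    ("Fri", 205), ("Sat", 226), ("Sun", 158),
]

TARGET_PEAK = 180    # illustrative post-meal target (mg/dL)
FLAG_FRACTION = 0.7  # flag a pattern if this fraction of days exceeds target

def flag_recurring_highs(peaks, target=TARGET_PEAK, frac=FLAG_FRACTION):
    """Surface a recurring post-meal high pattern for clinician review."""
    highs = [value for _, value in peaks if value > target]
    if len(highs) / len(peaks) >= frac:
        return (f"{len(highs)}/{len(peaks)} days exceeded {target} mg/dL "
                f"after breakfast (mean peak {mean(highs):.0f} mg/dL); "
                f"consider reviewing the breakfast insulin-to-carb ratio "
                f"with your clinician.")
    return None

print(flag_recurring_highs(cgm_peaks))
```

A deployed system would, of course, learn such patterns from far richer data; the point is that its output arrives as a recommendation whose derivation neither patient nor clinician can directly inspect.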
But how these AI-based insights are consumed by the patient and clinician is uncharted territory, with scant population-level evidence to guide their use. Just as Bring Your Own Device can lead to incompatibility between institutional infrastructure and personal tools, Bring Your Own Algorithm in healthcare confronts patients and clinicians with cases where the AI-based advice patients obtain on their own is incompatible with best-practice clinical guidelines, the clinician's judgment, or, in some cases, prior models or algorithms used for similar medical cases.2 Navigating the conflicts between population-level guidelines and individualized, algorithmic recommendations generated through a combination of advanced medical testing, patient-generated data, and AI-based systems is a challenge for which both clinicians and patients are unprepared.
The potential for unproductive contestability,7 where the clinician challenges the machine recommendations available to the patient, is concerning: the patient's involvement may transform potentially productive differences in perspective (for example, clinicians thinking more deeply because algorithmic advice differs from their intuition) into personalized conflict that threatens the clinician's perceived expertise and patient-clinician trust, and may generate uncertainty or worry for the patient. Yet contestability is likely, because machine learning models are fallible and sensitive to bias in training, and patients often lack the broader medical context within which to evaluate algorithmic advice. As a result, the emerging BYOA reality alters clinicians' role, emphasizing their ability to interact effectively with patients and to curate, reconcile, and communicate alternative interpretations of the information and recommendations produced by algorithmic advice tools.
While a wealth of information can help educate patients about their health and medical options, patients often lack the broader background needed to correctly interpret the medical information now available to them, leading to misunderstandings or errors that clinicians must correct or reconcile. Troublingly, when AI-based tools offer alternative or conflicting diagnoses, advice, or courses of treatment, these tools and the misguided interpretation of data can erode patients' trust in clinicians and the medical advice they provide.
As BYOA profoundly alters patient-clinician-information-interpretation relationships, new thinking is required to best harness computing in a clinical interaction context. We see three complementary approaches to potential solutions, bringing together new computing-based tools and organizational practices, as described here.
The use of "black-box" tools for diagnosis and recommendation by patients and clinicians begets two undesired outcomes. First, such tools are often not trusted by their clinician users, who do not understand why a tool reached a particular diagnosis or recommendation. Clinician distrust may be especially likely in the BYOA situation, where the algorithms patients access are unfamiliar to clinicians. Second, increasing patients' direct access to such tools can jeopardize patients' trust in clinicians' judgment and advice.11 One way to alleviate these concerns involves the use of explainable systems,1 focusing on both user types (patients and clinicians). Much of the research on the explainability and interpretability of black-box systems has involved visualizing neural networks, analyzing machine learning systems, and training easily interpretable systems to approximate black-box systems. The intended audiences for these approaches are often computer scientists. More work is needed on how explanations should be provided to clinicians (users who do not understand the technology but are experts in the application domain) and patients (users lacking knowledge of both technology and application domain).
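To ground the last of these approaches, the following is a minimal sketch of a "global surrogate": an easily interpretable model, here a shallow decision tree, trained to approximate a black-box model's predictions. The data and models are synthetic stand-ins, assuming scikit-learn; a real clinical pipeline would require careful validation.

```python
# Minimal global-surrogate sketch: fit an interpretable tree to mimic
# a black-box model. Synthetic data stands in for clinical features;
# this is an illustration, not a validated pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's *outputs*, not the true labels,
# so the tree explains the model rather than the underlying data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable tree agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(8)]))
```

The printed tree is inspectable, but note the audience problem the paragraph raises: thresholds over abstract features speak to data scientists, not to clinicians or patients.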
One potential way to make explainable systems more useful is with natural language-based explanation user interfaces, via embodied and non-embodied conversational agents. In previous research,3 we found that many complex and interacting human factors affect non-expert users' confidence in a system, including perceptions of the understandability of the explanation, its adequacy, and how intelligent and friendly the system is. The importance of these factors likely differs with the user's level of domain expertise, suggesting that different explanations would be effective for patients and physicians. We need to further investigate the effects of different explanatory styles on patients and physicians in BYOA contexts, in addition to improving techniques for making black-box algorithms more explainable and interpretable.
To align the information patients and clinicians are exposed to, while accounting for the vast differences in their expertise and formal education, new tools should be developed that provide patients with a simplified version of the explainable systems clinicians use, along with tools and features that help users assess the reliability of the algorithms involved. Such tools and features will enhance patients' and clinicians' trust in the algorithms and understanding of their limitations, mitigate potentially unproductive contestability, and help establish a common ground for patient-clinician interaction and enhanced patient trust in clinicians.
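As a hypothetical illustration of this alignment, the same underlying explanation could be rendered at two levels of detail, with a reliability cue attached. All field names, wording, and numbers below are our own assumptions, not an existing system's output.

```python
# Illustrative sketch: one explanation payload rendered for two audiences.
# Field names, wording, and the reliability number are invented.
explanation = {
    "recommendation": "increase basal insulin by 10%",
    "top_factors": [          # (factor, relative weight)
        ("overnight glucose trend", 0.42),
        ("response to prior basal changes", 0.31),
        ("recent activity level", 0.12),
    ],
    "reliability": 0.87,      # e.g., agreement with outcomes on similar cases
}

def render(exp, audience):
    if audience == "clinician":
        factors = "; ".join(f"{name} (weight {w:.2f})"
                            for name, w in exp["top_factors"])
        return (f"Recommendation: {exp['recommendation']}. Drivers: {factors}. "
                f"Reliability on comparable cases: {exp['reliability']:.0%}.")
    # Patient-facing rendering: fewer numbers, plainer language, explicit caveat.
    top_factor = exp["top_factors"][0][0]
    return (f"The tool suggests: {exp['recommendation']}, mainly because of "
            f"your {top_factor}. On cases like yours it agrees with outcomes "
            f"about {exp['reliability']:.0%} of the time. Please discuss this "
            f"with your clinician before changing anything.")

print(render(explanation, "clinician"))
print(render(explanation, "patient"))
```

Because both renderings draw on one payload, patient and clinician argue over the same evidence rather than over two different artifacts, which is precisely the common ground the paragraph calls for.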
To complement the development of patient- and clinician-facing explainable systems, new occupations may be needed to serve as curators and communication bridges between patients, medical information, and clinicians. Just as new technologies in the past often led to the emergence of new occupational categories and the elimination of others,6 BYOA may demand new work functions whose training and day-to-day operation integrate medical knowledge, a basic understanding of machine learning, communication skills, and information-curation savvy. These new healthcare team members will be trained to engage with patients around shared BYOA and explainable systems in ways that empower patients without threatening clinicians. Their inclusion in a patient-focused healthcare environment will be a boon to overburdened and increasingly burned-out clinicians10 who struggle to cope with growing demands on their time.
A complementary approach treats increased patient interaction with self-diagnosis and advice tools as an opportunity to engage patients in designing future tools. BYOA systems can be a clinical healthcare goal rather than an unplanned outcome of consumer product availability, making the interaction between patients, clinicians, information, and interpretation better managed and more effective. Just as companies benefit from the insights of lead users8 who bring important user perspectives and novel ideas to the design of the tools companies develop, BYOA tools could benefit from patient-clinician design collaborations, in which the needs, expectations, and knowledge gaps of patients come into close contact with the clinicians, designers, and medical informaticists who develop better, and better understood, future tools. In the spirit of user-in-the-loop, patient-centered co-design,9 patient-clinician-designer co-design of algorithmic advice tools would focus on a customizable tool whose advice content and presentation are adjustable to different personas, user preferences, and levels of computer and visualization literacy. Following such co-design, the adjustment of algorithmic advice tools could ultimately be made by the clinician, by the patient, or in consultation between them, as sketched below. Such patient-in-the-loop design processes, in which patients and clinicians interact around developing BYOA prototypes, could help mitigate misguided or wrong patient self-diagnosis and data interpretation, and the stress and anxiety they can provoke.
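As one small, hypothetical sketch of what such customization might look like, a co-designed tool could expose its adjustable presentation properties as an explicit configuration that the patient, the clinician, or both can set. The fields, defaults, and wording below are illustrative assumptions, not a real product's interface.

```python
# Hypothetical persona configuration for a co-designed advice tool;
# all fields, values, and wording are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AdvicePreferences:
    numeracy: str = "basic"         # "basic" or "detailed" numeric content
    show_uncertainty: bool = True   # surface confidence intervals?
    set_by: str = "patient"         # who last adjusted: patient/clinician/joint

def present_advice(dose_change_pct: float,
                   ci: tuple,
                   prefs: AdvicePreferences) -> str:
    """Render the same algorithmic suggestion per the agreed preferences."""
    if prefs.numeracy == "detailed":
        msg = f"Suggested basal adjustment: {dose_change_pct:+.1f}%"
        if prefs.show_uncertainty:
            msg += f" (95% CI {ci[0]:+.1f}% to {ci[1]:+.1f}%)"
        return msg + "."
    direction = "increase" if dose_change_pct > 0 else "decrease"
    return (f"A small {direction} in your basal dose may help; "
            f"please review it with your clinician.")

prefs = AdvicePreferences(numeracy="detailed", set_by="joint")
print(present_advice(8.0, (3.5, 12.5), prefs))
```

Keeping the preferences explicit, and recording who set them, makes the co-design agreement itself an inspectable part of the tool.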
Computing has a rich history of transforming healthcare: from medical imaging to electronic health records to expert systems, computing has been facilitating major shifts in healthcare practices and tools of the trade. With data-intensive and AI-based computing tools increasingly made available directly to patients, computing is once again transforming healthcare, but this time transforming the medical expert profession and the relationship between patients and their healthcare providers. This transformation poses a number of challenges to clinicians that require new thinking about the emerging patient-clinician-information-interpretation relationships. In this Viewpoint we outline some of the key characteristics of this transformation, and possible ways to address the challenges. We acknowledge that potential solutions may require the development of new tools and roles, which may lead to new challenges, such as the need to integrate new tools into clinicians' workflow. We therefore emphasize the need for a combination of technological and organizational perspectives in scoping and developing such tools and workflows, to ensure any solution will conform to the Hippocratic Oath principle of "first, do no harm."
1. Abdul, A. et al. Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 ACM CHI Conference on Human Factors in Computing Systems (2018).
2. Bansal, G. et al. Updates in human-AI teams: Understanding and addressing the performance/compatibility trade-off. In Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 2019), 2429–2437.
3. Ehsan, U. et al. Rationalization: A neural machine translation approach to generating natural language explanations. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (2018), ACM, 81–87.
4. Fraser, H., Coiera, E., and Wong, D. Safety of patient-facing digital symptom checkers. The Lancet 392, 10161 (2018), 2263–2264.
5. French, A., Guo, C., and Shim, J. Current status, issues, and future of bring your own device (BYOD). Communications of the Association for Information Systems 35, 1 (2014), 10.
6. Frey, C. and Osborne, M. The future of employment: How susceptible are jobs to computerization? Technological Forecasting and Social Change 114 (2017), 254–280.
7. Hirsch, T. et al. Designing contestability: Interaction design, machine learning, and mental health. In Proceedings of the 2017 Conference on Designing Interactive Systems (2017), ACM, 95–99.
8. Lilien, G.L. et al. Performance assessment of the lead user idea-generation process for new product development. Management Science 48, 8 (2002), 1042–1059.
9. Luo, Y., Liu, P. and Choe, E.K. Co-designing food trackers with dietitians: Identifying design opportunities for food tracker customization. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (2019), 1–13.
10. Schwenk, T.L. and Gold, K.J. Physician burnout—A serious symptom, but of what? JAMA 320, 11 (2018), 1109–1110; doi:10.1001/jama.2018.11703.
11. Vayena, E., Blasimme, A., and Cohen, I.G. Machine learning in medicine: Addressing ethical challenges. PLoS Medicine 15, 11 (2018), e1002689.