In epistemology, the philosophical study of knowledge, the traditionally honored methods of knowledge acquisition are perception, deduction, and often induction. They are solid; they are authoritative. We can trust what we see or hear or smell, we can trust what we deduce validly from previous knowledge, and we can trust what we observe over and over again.
But notice that much, perhaps most, of what we know comes from what other people tell us. What is the status of that... that chatter, or quasi-information, or statements of unknown veracity, or whatever it is? We call this mechanism "testimony" (not limited to its legal sense of what is said in court, but any act of telling from one person to another, spoken or written). Insofar as we commonly treat the verb "to know" as factive, meaning that if you know P, then P is true, we cannot state flatly that testimony is a source of knowledge, because people tell each other falsehoods all the time. (This piece sets aside the problems of information disorder, which could be viewed as false testimony running at large.)
And now, suddenly, we are getting falsehoods and truths delivered by programs, AI chatbots, also in the form of telling somebody something. Can the research on testimony, which grapples with these complications, inform our understanding of chat applications based on Large Language Models? Barest background: the epistemological investigation is generally grounded in a definition of knowledge that amounts to justified true belief. In action, testimony involves a speech act of assertion, A, from a speaker or testifier T to a listener or receiver R. A great deal of material awaits the reader who wants more [Lackey, Green].
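For readers who enjoy shorthand, that background can be compressed into symbols. This is my own rough gloss, not standard notation from the testimony literature:

```latex
% Factivity of "know": if subject S knows P, then P is true.
\[ K_S P \rightarrow P \]

% The justified-true-belief (JTB) analysis of knowledge:
% S knows P iff S believes P, P is true, and S is justified in believing P.
\[ K_S P \leftrightarrow (B_S P \wedge P \wedge J_S P) \]

% Testimony: testifier T asserts P to receiver R, who comes to believe it.
% The open question: under what conditions, if any, does this yield K_R P?
\[ A(T, R, P) \Rightarrow B_R P \]
```

The last line is deliberately noncommittal; whether and when the assertion A yields knowledge, rather than mere belief, for R is exactly what the literature disputes.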
Because T can be wrong, confused, or deliberately misleading with regard to the statement P, it would be safe to adhere to the theory that testimony results only in R's belief (rather than knowledge) that P. However, that theory is constantly belied by our standard acceptance of what people tell us. We say, "I know my birthday because my parents told me." We say, "Thanks to that guy on the corner over there, I know how to get to the restaurant." We treat testimony as knowledge without a second thought (except when we do have second or third thoughts, that is, doubts). And note that teaching is testimony, and we would be loath to disavow it as a source of knowledge.
The lively commentary on AI chatbots reflects wild enthusiasm for their apparent discourse skills, along with measured enthusiasm [Hoffman], along with caution, along with perplexity, along with dread. What we really want to know is, simply, when and where and how AI chatbots can help us, a question with which this author struggles. Here, I assume that the raw input comes from gigantic text corpora, and ignore processing methods, commercial arrangements, copyright issues, and so forth. Here, I ask what we need to understand about AI chatbots in terms of epistemology or its artificial parallel, formulating questions to ask by starting with inquiries from the study of testimony.
The basic question is, "Testimony: we use it all the time, but what is its role in knowledge?" Researchers come at this big question from many angles:
So we can now ask, "Is an AI chatbot a source of knowledge? What is its role? Is that role a new one?" We can come at this from several angles, some analogous to those above:
Let's consider question 1.b. Sometimes we use "belief" in knowledge representation to apply to a proposition, maintained in some predicate form, in a database. That won't work here. The AI chatbot does not "keep track" of information about the world. Is there some other way for a computational device to express propositional commitment?
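To make that contrast concrete, here is a toy sketch in Python, entirely my own invention and not a description of any actual system: a classical knowledge base expresses propositional commitment through an explicit, inspectable store of predicate-form facts, an interface an LLM-based chatbot simply does not offer.

```python
# A toy knowledge base: "belief" as explicit propositional commitment.
# Illustrative only; no real chatbot is claimed to work this way.

class KnowledgeBase:
    """Stores predicate-form propositions the system is committed to."""

    def __init__(self):
        self.facts = set()  # each fact is a (predicate, *args) tuple

    def tell(self, predicate, *args):
        """Commit to a proposition, e.g. tell('capital', 'France', 'Paris')."""
        self.facts.add((predicate, *args))

    def retract(self, predicate, *args):
        """Withdraw a commitment."""
        self.facts.discard((predicate, *args))

    def ask(self, predicate, *args):
        """Report whether the system is committed to the proposition."""
        return (predicate, *args) in self.facts


kb = KnowledgeBase()
kb.tell("capital", "France", "Paris")
print(kb.ask("capital", "France", "Paris"))  # True: an inspectable commitment
print(kb.ask("capital", "France", "Lyon"))   # False: no such commitment

# An LLM-based chatbot, by contrast, exposes no such table of facts to
# consult or retract; its answer is generated text, and whether anything
# behind that text counts as propositional commitment is the open question.
```

The point of the toy is the interface: tell, ask, and retract give a determinate answer to "is the system committed to P?", and it is precisely that determinate answer the chatbot lacks.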
Let's take the last question, about additional content. In the case of the AI chatbot, we can answer: "Yes!" The response tells us what a lot of people say on this topic. That's useful. Or, at least, it's useful, subject to the coherence and diversity of those people. Note that this feature, and others that depend on the training of the LLM, might be diluted by hand-coding or restricted input.
Other questions are left to the reader to contemplate and, this author hopes, to apply to open issues regarding the proper place of AI chatbots. These issues are just a juicy sample of the compelling linguistic and philosophical work on testimony, in which all of the elements in the definitions undergo energetic and disputatious articulation and analysis. And so they should. This work addresses one of the great questions of human communication: How can we learn so much (or anything at all) from other people, fallible as that channel is? And how do AI chatbots expand the scope of that question?
References
[Floridi] Luciano Floridi. 2011. The Philosophy of Information. Oxford University Press.
[Green] Christopher R. Green. 2023. "Epistemology of Testimony." The Internet Encyclopedia of Philosophy, ISSN 2161-0002.
[Hoffman] Reid Hoffman. 2023. Amplifying Our Humanity Through AI. John Templeton Foundation (from Greylock).
[Lackey] Jennifer Lackey and Ernest Sosa (eds.). 2006. The Epistemology of Testimony. Oxford University Press.
Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.