University of Southern California (USC) researchers have developed a technique for evaluating chatbots' conversational skills, grading their responses on an engagement scale of 0 to 1. The approach rests on the premise that open-domain dialogue systems must be genuinely interesting to the user, not merely relevant.
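The article does not describe the model itself, so the sketch below is only an illustration of the general idea: a small network that maps a (query, response) pair of embeddings to a score bounded in (0, 1) by a sigmoid. The class name EngagementScorer, the embedding size, and the layer widths are assumptions for illustration, not details from the USC work.

```python
import torch
import torch.nn as nn

class EngagementScorer(nn.Module):
    """Maps (query, response) embedding pairs to an engagement score in (0, 1)."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Concatenated query/response embeddings -> single bounded score.
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Sigmoid(),  # bounds the output to (0, 1), matching the 0-to-1 scale
        )

    def forward(self, query_emb: torch.Tensor, response_emb: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([query_emb, response_emb], dim=-1)
        return self.mlp(pair).squeeze(-1)

# Usage with random stand-in embeddings; a real system would use a
# pretrained sentence encoder to produce these vectors.
scorer = EngagementScorer()
q, r = torch.randn(4, 256), torch.randn(4, 256)
print(scorer(q, r))  # four engagement scores, each in (0, 1)
```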
Sarik Ghazarian of the Information Sciences Institute at USC's Viterbi School of Engineering said understanding such assessments will help improve chatbots and other open-domain dialogue systems.
Said USC Viterbi's Nanyun Peng, "We can use [this work] as a development tool to easily automatically evaluate our systems with low cost. Also, we can explore integrating this evaluation score as feedback into the generation process via reinforcement learning to improve the dialogue system."
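Peng's quote points to two uses: cheap automatic evaluation during development, and feeding the score back into generation as a reinforcement-learning reward. Below is a hedged sketch of the second idea using a REINFORCE-style update; the toy policy and toy reward are stand-ins for a real response generator and a learned engagement scorer, and none of this reflects the USC team's actual training setup.

```python
import torch
import torch.nn as nn

class ToyPolicy(nn.Module):
    """Hypothetical generator stand-in: samples a response embedding from a
    Gaussian whose mean is a linear function of the query embedding."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mean = nn.Linear(dim, dim)

    def sample(self, query_emb):
        dist = torch.distributions.Normal(self.mean(query_emb), 1.0)
        response_emb = dist.sample()
        log_prob = dist.log_prob(response_emb).sum(-1)
        return response_emb, log_prob

def engagement_reward(query_emb, response_emb):
    # Toy stand-in for a learned 0-to-1 engagement scorer.
    return torch.sigmoid((query_emb * response_emb).mean(-1))

policy = ToyPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
query = torch.randn(8, 256)

# One REINFORCE step: sample responses, score their engagement, and
# nudge the policy toward higher-scoring outputs.
response, log_prob = policy.sample(query)
reward = engagement_reward(query, response)
loss = -(reward.detach() * log_prob).mean()  # maximize expected reward
optimizer.zero_grad()
loss.backward()
optimizer.step()
```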
From USC Viterbi School of Engineering