Thursday, 18th of January 2024, 12:00 – 1:00

Automatic hint generation

Venue: 
SR1

Lecturer:
Jamshid Mozafari - DS

Abstract: 

Nowadays, individuals increasingly engage in dialogues with Large Language Models (LLMs), seeking answers to their questions. At a time when such answers are readily accessible to anyone, stimulating and preserving humans' cognitive abilities, and ensuring that people maintain good reasoning skills, is crucial. This talk addresses this need by proposing hints, rather than answers (or before giving answers), as a viable solution. We introduce a framework for the automatic hint generation task and employ it to construct a novel large-scale dataset of approximately 160,000 hints corresponding to 16,000 questions. Additionally, we present two quality evaluation methods that measure the Convergence and Familiarity attributes of hints. To assess the quality of our dataset and proposed evaluation methods, we recruited 10 annotators, who annotated 3,000 hints. The effectiveness of hints varied, with success rates of 81%, 47%, and 31% for easy, medium, and hard questions, respectively. Furthermore, our evaluation methods correlate strongly with the annotators' judgments. In conclusion, our findings highlight three key insights: the facilitative role of hints in resolving unknown questions, the dependence of hint quality on question difficulty, and the feasibility of employing automatic evaluation methods for hint assessment.