Conference paper, 2021

QuestEval: Summarization Asks for Fact-based Evaluation

Thomas Scialom (Author)
Paul-Alexis Dray (Author)
Sylvain Lamprier (Author)
Jacopo Staiano (Author)
Alex Wang (Author)

Abstract

Summarization evaluation remains an open research problem: current metrics such as ROUGE are known to be limited and to correlate poorly with human judgments. To alleviate this issue, recent work has proposed evaluation metrics which rely on question answering models to assess whether a summary contains all the relevant information in its source document. Though promising, the proposed approaches have so far failed to correlate better than ROUGE with human judgments. In this paper, we extend previous approaches and propose a unified framework, named QuestEval. In contrast to established metrics such as ROUGE or BERTScore, QuestEval does not require any ground-truth reference. Nonetheless, QuestEval substantially improves the correlation with human judgments over four evaluation dimensions (consistency, coherence, fluency, and relevance), as shown in extensive experiments.
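As an illustration of the reference-free setting described above, here is a minimal usage sketch based on the open-source questeval Python package released alongside the paper (pip install questeval). The module path, class name, and corpus_questeval arguments follow the project README as I recall it and may differ between releases; only the candidate summaries and their source documents are required, references being optional.

```python
# Minimal sketch, assuming the `questeval` package (pip install questeval).
# Class and argument names follow the project README and may vary by release.
from questeval.questeval_metric import QuestEval

questeval = QuestEval()  # default settings; task/device options exist per README

sources = [
    "The city council approved the new transit budget on Tuesday, "
    "allocating two million dollars to expand the bus network."
]
summaries = [
    "The council approved a transit budget that expands the bus network."
]

# Reference-free scoring: each summary is compared against its source document,
# no gold reference summary is needed.
scores = questeval.corpus_questeval(
    hypothesis=summaries,
    sources=sources,
)
print(scores)  # typically a corpus-level score plus one score per example
```

As the abstract states, the score itself is produced by question answering models that check whether the summary carries the relevant information of its source document.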

Dates and versions

hal-03923328, version 1 (04-01-2023)

Identifiers

Cite

Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, et al. QuestEval: Summarization Asks for Fact-based Evaluation. 2021 Conference on Empirical Methods in Natural Language Processing, Nov 2021, Online and Punta Cana, Dominican Republic. pp. 6594-6604, ⟨10.18653/v1/2021.emnlp-main.529⟩. ⟨hal-03923328⟩