Journal of Chongqing University of Technology (Natural Science) ›› 2024, Vol. 38 ›› Issue (2): 170-180.

• Information & Computer •

Improving generalization of summarization with contrastive learning and temporal recursion

TANG Wenliang, CHEN Diyou, GUI Yujie, LIU Jieming, XU Junliang

  1. School of Information Engineering, East China Jiaotong University
  • Online: 2024-03-22 Published: 2024-03-22
  • About the authors: TANG Wenliang, male, professor, research interests: Internet of Things and artificial intelligence, E-mail: twlecjtu@163.com; corresponding author: CHEN Diyou, male, master's student, research interest: natural language processing, E-mail: Taloncdy666@163.com.

Abstract: To address the problems faced by traditional text summarization models trained with the cross-entropy loss, such as degraded performance during inference, low generalization, severe exposure bias during generation, and low textual similarity between the generated and reference summaries, a novel training approach is proposed. On the one hand, the model itself generates a candidate set via beam search and selects positive and negative samples according to the evaluation scores of the candidate summaries; two contrastive loss functions are then built over the output candidate set from the "argmax greedy-search probability values" and the "label probability values". On the other hand, a temporal recursion function operating within the sentences of the candidate set guides the model to preserve temporal accuracy when it outputs each individual candidate summary, and mitigates the exposure bias problem. Experiments show that the proposed method improves generalization on the CNN/Daily Mail and XSum public datasets: ROUGE and BERTScore reach 47.54 and 88.51 on CNN/Daily Mail, and 48.75 and 92.61 on XSum.

Keywords: natural language processing, text summarization, contrastive learning, model fine-tuning
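The abstract does not give the exact form of the two contrastive losses or of the temporal recursion function, but the candidate-ranking idea it describes (beam-search candidates scored by an evaluation metric, with better-scored candidates pushed toward higher model probability) can be illustrated with a minimal PyTorch sketch. Everything below, including the function name, the pairwise margin form, and the dummy numbers, is an assumption for illustration, not the paper's implementation.

    # A minimal sketch, assuming PyTorch; the paper's exact loss definitions,
    # margin values, and temporal-recursion function are not specified in the
    # abstract, so this shows only the general candidate-ranking idea.
    import torch
    import torch.nn.functional as F

    def contrastive_ranking_loss(seq_log_probs: torch.Tensor,
                                 eval_scores: torch.Tensor,
                                 margin: float = 0.01) -> torch.Tensor:
        """Pairwise margin loss over beam-search candidate summaries.

        seq_log_probs: (num_candidates,) length-normalized log-probabilities
                       the model assigns to each candidate summary.
        eval_scores:   (num_candidates,) evaluation scores (e.g., ROUGE) used
                       to split candidates into positives and negatives.
        """
        # Order candidates from best to worst by their evaluation score.
        order = torch.argsort(eval_scores, descending=True)
        lp = seq_log_probs[order]
        loss = lp.new_zeros(())
        n = lp.size(0)
        # A better-scored candidate should receive a higher model probability
        # than a worse-scored one, by a margin that grows with rank distance.
        for i in range(n - 1):
            for j in range(i + 1, n):
                loss = loss + F.relu(lp[j] - lp[i] + margin * (j - i))
        return loss / (n * (n - 1) / 2)

    # Usage with dummy values: four candidates from one source document.
    log_probs = torch.tensor([-1.2, -0.8, -1.5, -1.0], requires_grad=True)
    rouge = torch.tensor([0.45, 0.52, 0.30, 0.41])
    print(contrastive_ranking_loss(log_probs, rouge))

In practice, seq_log_probs would come from the length-normalized token log-probabilities of a sequence-to-sequence model over candidates returned by beam search with multiple return sequences, and this ranking term would be combined with the label-probability contrastive term and the temporal recursion function described in the abstract.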

CLC number:

  • TP391.1