Authors :
Lakshin Pathak; Mili Virani; Dhyani Raval; Tvisha Patel
Volume/Issue :
Volume 9 - 2024, Issue 8 - August
Google Scholar :
https://tinyurl.com/44nmchtb
Scribd :
https://tinyurl.com/bdzkvzxv
DOI :
https://doi.org/10.38124/ijisrt/IJISRT24AUG1575
Abstract :
Text summarization is a crucial task in natural language processing (NLP), aiming to distill extensive information into concise and coherent summaries. Traditional summarization methods, including both extractive and abstractive techniques, face challenges in generating summaries that balance brevity and informativeness. This paper explores the application of Reinforcement Learning with Human Feedback (RLHF) to address these challenges and enhance the quality of text summarization. We introduce an RLHF-based approach using the FLAN-T5-small model, which integrates human feedback into the reinforcement learning framework to refine summary generation. Our method leverages a dataset from the Hugging Face datasets library, consisting of diverse document-summary pairs. The model is pre-trained on a large corpus and fine-tuned using human feedback, which serves as a reward signal to guide the model towards generating more relevant and coherent summaries. Our experimental results demonstrate that the RLHF-enhanced model significantly outperforms traditional summarization methods. Quantitative evaluations using ROUGE and BLEU metrics reveal substantial improvements in summary quality, with increases of up to 12.5% in ROUGE-1 and 9.8% in BLEU scores over baseline methods. Qualitative assessments by human evaluators further confirm that the RLHF-based model produces summaries that are more aligned with human expectations in terms of coherence and relevance. This study highlights the potential of RLHF to overcome the limitations of conventional summarization techniques, offering a robust framework for generating high-quality summaries across various domains. Future work will explore the scalability of this approach to more complex summarization tasks and the integration of additional feedback mechanisms to further enhance performance.
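The abstract describes the fine-tuning loop only at a high level. The snippet below is a minimal sketch of how such an RLHF loop could be assembled in Python around FLAN-T5-small, assuming the trl library's PPOTrainer interface (pre-0.12 API) together with the Hugging Face datasets library; the dataset choice, hyperparameters, and the reward_from_human_feedback helper are illustrative placeholders standing in for the human-feedback signal the paper describes, not the authors' exact setup.

```python
# Minimal sketch of RLHF fine-tuning for summarization with FLAN-T5-small.
# Assumes the trl library's PPOTrainer interface (pre-0.12 API); the dataset,
# hyperparameters, and reward helper are illustrative assumptions, not the
# authors' exact configuration.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead, PPOConfig, PPOTrainer

MODEL_NAME = "google/flan-t5-small"

# Document-summary pairs from the Hugging Face datasets library
# (CNN/DailyMail is used here purely as an example corpus).
dataset = load_dataset("cnn_dailymail", "3.0.0", split="train[:1%]")

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(MODEL_NAME)
ref_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(MODEL_NAME)

config = PPOConfig(model_name=MODEL_NAME, learning_rate=1.41e-5,
                   batch_size=4, mini_batch_size=4)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)


def reward_from_human_feedback(document: str, summary: str) -> float:
    """Hypothetical placeholder for the human-feedback reward described in the
    paper (e.g. an annotator preference score or a learned reward model).
    Here it only rewards summaries of non-trivial length."""
    return 1.0 if len(summary.split()) >= 5 else 0.0


for start in range(0, len(dataset) - config.batch_size, config.batch_size):
    batch = dataset[start:start + config.batch_size]  # dict of lists

    # Encode documents as T5-style summarization prompts.
    query_tensors = [
        tokenizer("summarize: " + doc, return_tensors="pt",
                  truncation=True, max_length=512).input_ids.squeeze(0)
        for doc in batch["article"]
    ]

    # Sample candidate summaries from the current policy.
    response_tensors = ppo_trainer.generate(query_tensors, max_new_tokens=64)
    summaries = tokenizer.batch_decode(response_tensors, skip_special_tokens=True)

    # The feedback signal becomes the scalar reward that PPO optimizes.
    rewards = [torch.tensor(reward_from_human_feedback(doc, s))
               for doc, s in zip(batch["article"], summaries)]

    ppo_trainer.step(query_tensors, response_tensors, rewards)
```

In practice, the placeholder reward would be replaced by preference scores collected from annotators, or by a reward model trained on such preferences, as the RLHF framing in the abstract implies.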
Keywords :
Reinforcement Learning, Human Feedback, Text Summarization, Natural Language Processing.
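The ROUGE and BLEU comparisons reported in the abstract can be computed with the Hugging Face evaluate library; a minimal sketch follows, with placeholder strings rather than the paper's actual model outputs.

```python
# Minimal sketch: scoring generated summaries against references with ROUGE
# and BLEU via the Hugging Face `evaluate` library. The strings are placeholders.
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

predictions = ["the model generated this short summary of the article"]
references = ["a human wrote this reference summary of the article"]

print(rouge.compute(predictions=predictions, references=references)["rouge1"])
print(bleu.compute(predictions=predictions, references=references)["bleu"])
```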