Abstractive Text Summarization Using Hybrid Methods
Rouge Precision = (number of overlapping n-grams) / (total number of n-grams in model output)
Rouge F1-Score = 2 × (Precision × Recall) / (Precision + Recall)
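As a sketch of how these n-gram overlap scores are computed in practice (function and variable names here are illustrative, not from the paper), ROUGE-N can be implemented with clipped n-gram counts:

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """ROUGE-N precision, recall, and F1 between two token lists."""
    def ngrams(tokens):
        # Multiset of n-grams in the token sequence
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())  # clipped matches (min count per n-gram)
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = rouge_n("the cat sat on the mat".split(),
                  "the cat lay on the mat".split())
```

Counting is clipped via the multiset intersection, so a word repeated in the candidate cannot be credited more often than it appears in the reference.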
Evaluation Metrics cont.
● Rouge-L: measures the longest common subsequence (LCS) between
our model output and the reference
Recall = LCS(model output, reference) / (number of words in reference)
Precision = LCS(model output, reference) / (number of words in model output)
F1-Score = 2 × (Precision × Recall) / (Precision + Recall)
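The three ROUGE-L formulas above can be sketched directly from an LCS dynamic program (names here are illustrative, not from the paper):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence, O(len(a) * len(b)) DP."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate, reference):
    """ROUGE-L precision, recall, and F1 from token lists."""
    lcs = lcs_len(candidate, reference)
    precision = lcs / len(candidate)
    recall = lcs / len(reference)
    f1 = 2 * precision * recall / (precision + recall) if lcs else 0.0
    return precision, recall, f1
```

Unlike ROUGE-N, no fixed n-gram length is chosen: the LCS automatically rewards the longest in-order word overlap, so word order matters but gaps are allowed.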
[2] Y. Wu, B. Hu, Learning to extract coherent summary via deep reinforcement learning, in:
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th
innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on
Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February
2-7, 2018, 2018, pp. 5602–5609.
[3] R. Nallapati, F. Zhai, B. Zhou, SummaRuNNer: A recurrent neural network based sequence model
for extractive summarization of documents, in: Proceedings of the Thirty-First AAAI Conference on
Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, 2017, pp. 3075–3081.
References (contd.)
[4] E. Sharma, L. Huang, Z. Hu, L. Wang, An entity-driven framework for abstractive summarization,
in: K. Inui, J. Jiang, V. Ng, X. Wan (Eds.), Proceedings of the 2019 Conference on Empirical Methods
in Natural Language Processing and the 9th International Joint Conference on Natural Language
Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, Association for
Computational Linguistics, 2019, pp. 3278–3289. doi:10.18653/v1/D19-1323.
[5] A. See, P. J. Liu, C. D. Manning, Get to the point: Summarization with pointer-generator
networks, in: Proceedings of the 55th Annual Meeting of the Association for Computational
Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, 2017, pp.
1073–1083.
[6] J. Xu, G. Durrett, Neural extractive text summarization with syntactic compression, in:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the
9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association
for Computational Linguistics, Hong Kong, China, 2019, pp. 3283–3294.
References (contd.)
[7] C. Lin, ROUGE: A package for automatic evaluation of summaries, in: Text Summarization
Branches Out, Post-Conference Workshop of ACL 2004, Barcelona, Spain, 2004.
[8] J. Devlin, M. Chang, K. Lee, K. Toutanova, BERT: pre-training of deep bidirectional transformers
for language understanding, in: Proceedings of the 2019 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies,
NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), 2019.