Authors:
Vikash; Anuj Sharma; Mohit; Jitesh
Volume/Issue:
Volume 10 - 2025, Issue 10 - October
Google Scholar:
https://tinyurl.com/5n7s9bce
Scribd:
https://tinyurl.com/49s396sa
DOI:
https://doi.org/10.38124/ijisrt/25oct1504
Note: A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Note: Google Scholar may take 30 to 40 days to display the article.
Abstract:
The task of generating appropriate and engaging research paper titles has gained significant attention with the
rise of large-scale digital repositories and natural language processing (NLP) advancements. This study investigates the
use of recurrent neural networks (RNNs) and deep learning techniques to automate the generation of scientific paper titles
from abstract text, leveraging large collections of abstract–title pairs. The research focuses on model design, training strategies, and data preprocessing
techniques to effectively capture semantic and contextual information for accurate title prediction. Through a comparative
evaluation of traditional RNN-based approaches and advanced sequence-to-sequence architectures, we analyze the
models’ performance in terms of syntactic coherence, relevance, and fluency. Key challenges addressed include overfitting,
data sparsity, and semantic drift between input abstracts and generated titles. The study highlights trade-offs between
model complexity, training time, and output quality, offering insights into optimizing neural networks for title generation.
Future research directions emphasize integrating transformer-based models, enhancing abstract-to-title alignment, and
reducing dependence on large annotated datasets. The results contribute to the broader understanding of automated
scientific writing tools and their applications in academic content generation and metadata enrichment.
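As an illustration of the data preprocessing step the abstract describes, the sketch below prepares abstract–title pairs for a sequence-to-sequence model: tokenization, vocabulary construction, and fixed-length index encoding with special tokens. This is a minimal, hypothetical example assuming a simple word-level scheme; it is not taken from the paper, and all function and token names are illustrative.

```python
# Illustrative preprocessing for abstract-to-title generation (hypothetical).
import re
from collections import Counter

PAD, UNK, BOS, EOS = "<pad>", "<unk>", "<bos>", "<eos>"

def tokenize(text):
    """Lowercase and keep alphanumeric word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_vocab(texts, min_freq=1):
    """Map each token seen at least min_freq times to an integer id."""
    counts = Counter(tok for t in texts for tok in tokenize(t))
    vocab = {PAD: 0, UNK: 1, BOS: 2, EOS: 3}
    for tok, freq in counts.most_common():
        if freq >= min_freq:
            vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab, max_len=32):
    """Convert text to a fixed-length id sequence with BOS/EOS and padding."""
    ids = [vocab[BOS]] + [vocab.get(t, vocab[UNK]) for t in tokenize(text)]
    ids = ids[: max_len - 1] + [vocab[EOS]]
    return ids + [vocab[PAD]] * (max_len - len(ids))

# A toy abstract-title pair; real training would use thousands of such pairs.
pairs = [("Recurrent networks model sequences.", "RNN sequence modeling")]
vocab = build_vocab([a for a, _ in pairs] + [t for _, t in pairs])
enc = encode(pairs[0][0], vocab, max_len=8)
```

In a full pipeline, the encoded abstract sequences would feed an RNN encoder and the encoded titles would serve as decoder targets; truncation to `max_len` and the `<unk>` fallback are the simplest mitigations for the data-sparsity issue the abstract mentions.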
Keywords:
NLP, RNN, Title Generation, Scientific Writing Automation, Deep Learning.
References:
- Ofori-Boateng, Regina, et al. "Towards the automation of systematic reviews using natural language processing, machine learning, and deep learning: a comprehensive review." Artificial Intelligence Review 57.8 (2024): 200.
- Mishra, Prakhar, et al. "Automatic title generation for text with pre-trained transformer language model." 2021 IEEE 15th International Conference on Semantic Computing (ICSC). IEEE, 2021.
- Aftiss, A., Lamsiyah, S., Schommer, C., & El Alaoui, S. O. (2025, January). Empirical Evaluation of Pre-trained Language Models for Summarizing Moroccan Darija News Articles. In Proceedings of the 4th Workshop on Arabic Corpus Linguistics (WACL-4) (pp. 77-85).
- J. Sarmah, M. L. Saini, A. Kumar, and V. Chasta, “Performance Analysis of Deep CNN, YOLO, and LeNet for Handwritten Digit Classification,” Lect. Notes Networks Syst., vol. 844, pp. 215–227, 2024, doi: 10.1007/978-981-99-8479-4_16.
- M. Lal Saini, B. Tripathi, and M. S. Mirza, “Evaluating the Performance of Deep Learning Models in Handwritten Digit Recognition,” Proc. - Int. Conf. Technol. Adv. Comput. Sci. ICTACS 2023, pp. 116–121, 2023, doi: 10.1109/ICTACS59847.2023.10390027
- Kang, B., & Shin, Y. (2024, May). Improving Low-Resource Keyphrase Generation through Unsupervised Title Phrase Generation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 8853-8865).
- Mishra, Prakhar, et al. "Automatic title generation for learning resources and pathways with pre-trained transformer models." International Journal of Semantic Computing 15.04 (2021): 487-510.
- M. L. Saini, R. S. Telikicharla, Mahadev, and D. C. Sati, “Handwritten English Script Recognition System Using CNN and LSTM,” Proc. InC4 2024 - 2024 IEEE Int. Conf. Contemp. Comput. Commun., 2024, doi: 10.1109/InC460750.2024.10649099
- A. K. Kushwaha, S. Jadon, P. Kamal, M. L. Saini, and V. M. Shrimal, “Comments and Feedback Verification System using Large Language Model,” in 8th IEEE International Conference on Computational System and Information Technology for Sustainable Solutions, CSITSS 2024, 2024. doi: 10.1109/CSITSS64042.2024.10816848
- Fatima, N., Imran, A. S., Kastrati, Z., Daudpota, S. M., & Soomro, A. (2022). A systematic literature review on text generation using deep neural network models. IEEE Access, 10, 53490-53503.
- M. Sohail, M. Lal Saini, V. P. Singh, S. Dhir, and V. Patel, “A Comparative Study of Machine Learning and Deep Learning Algorithm for Handwritten Digit Recognition,” Proc. Int. Conf. Contemp. Comput. Informatics, IC3I 2023, pp. 1283–1288, 2023, doi: 10.1109/IC3I59117.2023.10397956
- M. L. Saini, A. Patnaik, Mahadev, D. C. Sati, and R. Kumar, “Deepfake Detection System Using Deep Neural Networks,” 2024 2nd Int. Conf. Comput. Commun. Control. IC4 2024, 2024, doi: 10.1109/IC457434.2024.10486659
- D. Gupta, M. L. Saini, S. P. K. Mygapula, S. Maji, and V. Prabhas, “Generating Realistic Images Through GAN,” in Proceedings - 4th International Conference on Technological Advancements in Computational Sciences, ICTACS 2024, 2024, pp. 1378–1382. doi: 10.1109/ICTACS62700.2024.10841324
- B. Mulakala, M. L. Saini, A. Singh, V. Bhukya, and A. Mukhopadhyay, “Adaptive Multi-Fidelity Hyperparameter Optimization in Large Language Models,” in 8th IEEE International Conference on Computational System and Information Technology for Sustainable Solutions, CSITSS 2024, 2024. doi: 10.1109/CSITSS64042.2024.10816794
- C. Sasidhar, M. L. Saini, M. Charan, A. V. Shivanand, and V. M. Shrimal, “Image Caption Generator Using LSTM,” Proc. - 4th Int. Conf. Technol. Adv. Comput. Sci. ICTACS 2024, pp. 1781–1786, 2024, doi: 10.1109/ICTACS62700.2024.10841294
- Qin, Xuan, et al. "Natural language processing was effective in assisting rapid title and abstract screening when updating systematic reviews." Journal of Clinical Epidemiology 133 (2021): 121-129.
- Trappey, Amy JC, et al. "Intelligent compilation of patent summaries using machine learning and natural language processing techniques." Advanced Engineering Informatics 43 (2020): 101027.