Modification and Extension of a Neural Question Answering System with Attention and Feature Variants


Author : Jebaraj Vasudevan

Volume/Issue : Volume 10 - 2025, Issue 3 - March


Google Scholar : https://tinyurl.com/46h3t9tz

Scribd : https://tinyurl.com/7bfmj48n

DOI : https://doi.org/10.38124/ijisrt/25mar009



Abstract : This paper presents an improved version of the baseline DrQA question answering model on the SQuAD dataset. Specifically, it shows how a single Bi-LSTM model trained only on the SQuAD training set achieves a 5-6% performance improvement on both the SQuAD dev set and the Adversarial SQuAD dataset. Different attention mechanisms were also explored to better capture the interactions between the context and the question.
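
The abstract does not spell out the attention variants explored, but for orientation, the baseline DrQA model (Chen et al., 2017) scores each context token against the question with a bilinear attention. The PyTorch sketch below is illustrative only; the module name, tensor shapes, and masking convention are assumptions, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearSeqAttention(nn.Module):
    # Bilinear attention between Bi-LSTM context encodings and a pooled
    # question vector, in the style of DrQA's span-scoring layer.
    def __init__(self, context_dim, question_dim):
        super().__init__()
        # W maps the question vector into the context encoding space.
        self.linear = nn.Linear(question_dim, context_dim)

    def forward(self, context, question, padding_mask):
        # context:      (batch, ctx_len, context_dim) Bi-LSTM outputs per token
        # question:     (batch, question_dim)         pooled question encoding
        # padding_mask: (batch, ctx_len)              True at padded positions
        Wq = self.linear(question)                          # (batch, context_dim)
        scores = context.bmm(Wq.unsqueeze(2)).squeeze(2)    # (batch, ctx_len)
        scores = scores.masked_fill(padding_mask, float('-inf'))
        # Log-probabilities over context tokens, e.g. for answer-start prediction.
        return F.log_softmax(scores, dim=-1)

A second instance of the same module, with its own weight matrix, would score answer-end positions; richer variants such as bidirectional attention flow (Seo et al., 2017) replace this single bilinear term with token-to-token interaction matrices.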

References :

  1. Chen, D., Fisch, A., Weston, J., & Bordes, A. (2017). Reading Wikipedia to Answer Open-Domain Questions. Association for Computational Linguistics (ACL).
  2. Jia, R., & Liang, P. (2017). Adversarial Examples for Evaluating Reading Comprehension Systems. Empirical Methods in Natural Language Processing (EMNLP).
  3. Rajpurkar, P., Zhang, J., Lopyrev, K., & Liang, P. (2016). SQuAD: 100,000+ Questions for Machine Comprehension of Text. Empirical Methods in Natural Language Processing (EMNLP).
  4. Seo, M., Kembhavi, A., Farhadi, A., & Hajishirzi, H. (2017). Bidirectional Attention Flow for Machine Comprehension. The International Conference on Learning Representations (ICLR).
  5. Yerukola, A., & Kamath, A. (2018). Adversarial SQuAD.
