Authors :
Jebaraj Vasudevan
Volume/Issue :
Volume 10 - 2025, Issue 3 - March
Google Scholar :
https://tinyurl.com/46h3t9tz
Scribd :
https://tinyurl.com/7bfmj48n
DOI :
https://doi.org/10.38124/ijisrt/25mar009
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Abstract :
This paper presents an improved version of the baseline DrQA question answering model on the SQuAD dataset.
More specifically, it shows how a single Bi-LSTM model trained only on the SQuAD training set achieves a 5-6%
performance improvement on both the SQuAD dev set and the Adversarial SQuAD dataset. Different attention
mechanisms were also explored to see whether they would better capture the interactions between the context and the question.
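The DrQA baseline referenced in the abstract scores context tokens against the question through a learned bilinear form. As a minimal sketch of that idea (the shapes, names, and toy data here are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def bilinear_attention(context, question, W):
    """Bilinear attention in the style of DrQA's span scoring:
    score_i = c_i^T W q for each context token c_i, then softmax.

    context : (T, d) matrix of context token encodings
    question: (d,) question summary vector
    W       : (d, d) learned bilinear weight matrix (random here)
    """
    scores = context @ W @ question          # shape (T,)
    return softmax(scores)

# Toy example: 4 context tokens with hidden size 3.
rng = np.random.default_rng(0)
context = rng.standard_normal((4, 3))
question = rng.standard_normal(3)
W = rng.standard_normal((3, 3))
alpha = bilinear_attention(context, question, W)  # attention weights over context
```

The resulting weights form a distribution over context tokens; alternative attention mechanisms, such as those explored in the paper, change how these scores are computed while keeping the same normalization step.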
References :
- Chen, D., Fisch, A., Weston, J., & Bordes, A. (2017). Reading Wikipedia to Answer Open-Domain Questions. Association for Computational Linguistics (ACL).
- Jia, R., & Liang, P. (2017). Adversarial Examples for Evaluating Reading Comprehension Systems. Empirical Methods in Natural Language Processing (EMNLP).
- Rajpurkar, P., Zhang, J., Lopyrev, K., & Liang, P. (2016). SQuAD: 100,000+ Questions for Machine Comprehension of Text. Empirical Methods in Natural Language Processing (EMNLP).
- Seo, M., Kembhavi, A., Farhadi, A., & Hajishirzi, H. (2017). Bidirectional Attention Flow for Machine Comprehension. The International Conference on Learning Representations (ICLR).
- Yerukola, A., & Kamath, A. (2018). Adversarial SQuAD.