Authors :
Wrick Talukdar; Anjanava Biswas
Volume/Issue :
Volume 9 - 2024, Issue 5 - May
Google Scholar :
https://tinyurl.com/wjeb3md9
Scribd :
https://tinyurl.com/r4mfwe4y
DOI :
https://doi.org/10.38124/ijisrt/IJISRT24MAY2087
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Abstract :
While supervised learning models have shown remarkable performance on various natural language processing (NLP) tasks, their success relies heavily on large-scale labeled datasets, which are costly and time-consuming to obtain. Conversely, unsupervised learning techniques can leverage abundant unlabeled text to learn rich representations, but they do not directly optimize for specific NLP tasks. This paper presents a novel hybrid approach that combines unsupervised and supervised learning to improve the accuracy of NLP task modeling. Our methodology integrates an unsupervised module that learns representations from unlabeled corpora (e.g., language models, word embeddings) with a supervised module that leverages these representations to enhance task-specific models [4]. We evaluate the approach on text classification and named entity recognition (NER), demonstrating consistent performance gains over purely supervised baselines. For text classification, contextual word embeddings from a pretrained language model initialize a recurrent or transformer-based classifier; for NER, pretrained word embeddings initialize a BiLSTM sequence labeler. By combining the two paradigms, our hybrid approach achieves state-of-the-art results on benchmark datasets, paving the way for more data-efficient and robust NLP systems.
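To make the hybrid pipeline described above concrete, the sketch below shows one plausible reading of the NER setup: a BiLSTM sequence labeler whose embedding layer is initialized from word vectors learned on unlabeled text. This is an illustrative PyTorch sketch, not the authors' released code; the vocabulary size, tag count, and pretrained embedding matrix are hypothetical placeholders.

```python
# Illustrative sketch (assumptions noted): a BiLSTM NER tagger whose embedding
# layer is initialized from pretrained word vectors, i.e. an unsupervised
# representation module feeding a supervised task module.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, pretrained_embeddings: torch.Tensor, num_tags: int, hidden_dim: int = 256):
        super().__init__()
        embed_dim = pretrained_embeddings.size(1)
        # Unsupervised component: vectors learned from unlabeled text
        # (e.g., word2vec/GloVe-style) initialize the lookup table.
        self.embedding = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=False)
        # Supervised component: BiLSTM encoder + linear tag scorer trained on labeled NER data.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.scorer = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        hidden, _ = self.encoder(embedded)     # (batch, seq_len, 2 * hidden_dim)
        return self.scorer(hidden)             # per-token tag logits

# Usage with random stand-in vectors; in practice these come from unsupervised pretraining.
pretrained = torch.randn(10_000, 300)              # hypothetical 10k-word vocabulary, 300-d vectors
model = BiLSTMTagger(pretrained, num_tags=9)       # e.g., 9 BIO tags as in CoNLL-2003
logits = model(torch.randint(0, 10_000, (2, 12)))  # batch of 2 sentences, 12 tokens each
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 9), torch.randint(0, 9, (2 * 12,)))
```

Setting freeze=False lets the supervised stage fine-tune the unsupervised vectors; the same pattern applies to the text-classification setup, with contextual embeddings from a pretrained language model feeding a recurrent or transformer classifier.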
Keywords :
Supervised Learning, Unsupervised Learning, Natural Language Processing (NLP).
References :
- Radford A, Narasimhan K, Salimans T, Sutskever I. Improving language understanding by generative pre-training. OpenAI. 2018.
- Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Advances in Neural Information Processing Systems. 2017;30:5998-6008.
- Marcus MP, Marcinkiewicz MA, Santorini B. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics. 1993;19(2):313-330.
- Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. Proceedings of the 1st International Conference on Learning Representations, ICLR. 2013.
- Devlin J, Chang MW, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805. 2018.
- Dai AM, Le QV. Semi-supervised sequence learning. Advances in Neural Information Processing Systems. 2015;28.
- Peters ME, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. 2018.
- Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. 2019.
- Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. 2019.
- Zhang X, Zhao J, LeCun Y. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems. 2015;28:649-657.
- Pennington J, Socher R, Manning CD. GloVe: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014;1532-1543.
- Tjong Kim Sang EF, De Meulder F. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. 2003;142-147.
- Lample G, Ballesteros M, Subramanian S, Kawakami K, Dyer C. Neural Architectures for Named Entity Recognition. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2016;260-270.
- Søgaard A, Goldberg Y. Deep Multi-Task Learning with Low Level Tasks Supervised at Lower Layers. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2016;231-235.
- Tjong Kim Sang EF, Veenstra J. Representing Text Chunks. Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics. 1999;173-179.
- McNemar Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika. 1947;12(2):153-157. doi:10.1007/BF02295996.
- Dietterich TG. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation. 1998;10(7):1895-1923.
- [Web] How to Calculate McNemar's Test to Compare Two Machine Learning Classifiers. Machine Learning Mastery. Available from: https://machinelearningmastery.com/mcnemars-test-for-machine-learning/
- [Web] Student's t-test for paired samples. In: Statistical Methods for Research Workers. 1925. Available from: https://en.wikipedia.org/wiki/Student's_t-test#Paired_samples
- Hsu H, Lachenbruch P. Paired t Test. 2008. doi:10.1002/9780471462422.eoct969.