Authors :
Pavuluri Venkata Naresh Babu; Potnuru Prabhash; Peddina Hari Shankar; Dr. Z. Sunitha Bai
Volume/Issue :
Volume 10 - 2025, Issue 5 - May
Google Scholar :
https://tinyurl.com/fz3ytvvp
DOI :
https://doi.org/10.38124/ijisrt/25may722
Abstract :
Aspect-Based Sentiment Analysis (ABSA) identifies how people feel about specific parts (aspects) of something mentioned within a sentence, such as the "features" of a laptop or the "service" at a restaurant, and it plays a large role in analyzing opinions in reviews. Recently, contrastive learning has become popular for improving ABSA: the model learns better representations by comparing examples, pulling similar ones together and pushing dissimilar ones apart, much like learning to tell a good review from a bad one more clearly.
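For concreteness, the standard contrastive (InfoNCE-style) objective that such methods build on can be written as follows, where z_i is the representation of an example, z_i^+ its positive counterpart, and tau a temperature (the notation is ours, not taken from the paper):

```latex
\ell_i = -\log \frac{\exp\!\left(\operatorname{sim}(z_i, z_i^{+})/\tau\right)}
                    {\sum_{j \ne i} \exp\!\left(\operatorname{sim}(z_i, z_j)/\tau\right)},
\qquad
\operatorname{sim}(u, v) = \frac{u^{\top} v}{\lVert u \rVert \, \lVert v \rVert}.
```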
This paper examines two common contrastive learning methods for ABSA. Sentiment-Based Supervised Contrastive Learning uses the actual sentiment labels ("positive", "negative", or "neutral") to teach the model what to focus on, treating examples with the same label as positives. Augmentation-Based Unsupervised Contrastive Learning instead creates new versions of the same sentence to help the model capture its meaning, without using sentiment labels. A hedged sketch of the supervised variant follows.
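The following is a minimal sketch of a sentiment-based supervised contrastive loss in the style the paper describes, assuming aspect representations are pooled encoder vectors (how they are produced is not specified in the abstract; the function name and interface here are ours):

```python
import torch
import torch.nn.functional as F

def sentiment_supcon_loss(embeddings, labels, temperature=0.1):
    # embeddings: (N, d) aspect representations, e.g. pooled BERT vectors
    #             for each aspect occurrence (an assumption, not the
    #             paper's stated encoder).
    # labels:     (N,) sentiment ids, e.g. 0 = negative, 1 = neutral, 2 = positive.
    z = F.normalize(embeddings, dim=1)                  # cosine-similarity space
    sim = z @ z.t() / temperature                       # (N, N) pairwise scores
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))     # never contrast with self
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # mean log-probability of same-sentiment positives, per anchor
    pos_counts = pos_mask.sum(dim=1)
    safe_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    per_anchor = -safe_log_prob.sum(dim=1) / pos_counts.clamp(min=1)
    return per_anchor[pos_counts > 0].mean()            # skip anchors with no positive
```

In use, a batch of aspect embeddings and their sentiment labels would be passed directly, e.g. `sentiment_supcon_loss(torch.randn(8, 768), torch.tensor([0, 0, 1, 2, 2, 1, 0, 2]))`, typically added to the standard classification loss.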
The paper also introduces four new methods to make ABSA even better:
- Prompt-Based Contrastive Learning (PromptCL): uses language models to create paraphrased sentences with the same meaning, helping the system learn from different ways of saying the same thing.
- Aspect-Specific Adversarial Contrastive Learning (ASACL): slightly changes words near the aspect being analyzed, so the model becomes better at handling confusing or noisy inputs (see the sketch after this list).
- Hierarchical Contrastive Learning (HiCL): contrasts both the whole sentence and specific parts of it, for a more complete understanding.
- Graph-Augmented Contrastive Learning (GraphCL): uses graphs that show relationships between words to better understand how opinions are connected to aspects.
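The abstract does not give implementation details for these methods. As one illustration, an ASACL-style adversarial positive could be built by perturbing only the embeddings of tokens in or near the aspect span. This is a hypothetical sketch: the FGSM-style perturbation, the `aspect_mask` convention, and the model interface (a HuggingFace-style classifier taking `inputs_embeds` and returning `.logits`) are all our assumptions, not the paper's stated design:

```python
import torch
import torch.nn.functional as F

def aspect_adversarial_view(model, input_embeds, aspect_mask, labels, eps=0.01):
    # input_embeds: (B, T, d) token embeddings of the input sentences.
    # aspect_mask:  (B, T) 1.0 for tokens in/near the aspect span, else 0.0.
    # labels:       (B,) sentiment ids, used only to obtain a gradient signal.
    input_embeds = input_embeds.detach().requires_grad_(True)
    logits = model(inputs_embeds=input_embeds).logits   # assumed HF-style interface
    loss = F.cross_entropy(logits, labels)
    grad, = torch.autograd.grad(loss, input_embeds)

    # FGSM-style step, restricted to the aspect neighbourhood only
    delta = eps * grad.sign() * aspect_mask.unsqueeze(-1)
    return (input_embeds + delta).detach()              # a "hard" positive view
```

The clean and perturbed views of the same aspect would then be treated as a positive pair in a contrastive objective like the one sketched above.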
Keywords :
Aspect-Based Sentiment Analysis, Supervised Contrastive Learning, Prompt-Based Learning, Adversarial Data Augmentation, Sentence Embedding, Graph Neural Networks.