Authors :
Navin Kumar Sehgal; Antim Dev Mishra
Volume/Issue :
Volume 10 - 2025, Issue 12 - December
Google Scholar :
https://tinyurl.com/3x4ra6jh
Scribd :
https://tinyurl.com/3uks8u6b
DOI :
https://doi.org/10.38124/ijisrt/25dec1061
Abstract :
Recommender systems traditionally rely on historical interaction data and latent-factor models, which often fail
to capture users’ dynamic intentions, contextual preferences, and short-term goals. This study proposes a hybrid
recommendation framework that integrates Large Language Model (LLM)-derived user control variables with matrix
factorization to improve prediction accuracy and model responsiveness. Using natural-language prompts, the LLM extracts
four structured control features—Perspective, Variation, Organizing, and Restore—which represent user intent,
exploration preference, active interest clusters, and noise reduction signals. These control variables are fused with user and
item latent factors through a control-aware rating function that adjusts the baseline matrix factorization output.
Experimental evaluation on the Book-Crossing dataset demonstrates that incorporating LLM-derived controls reduces
RMSE by up to 7.8% and increases Precision by 12.3% compared to standard matrix factorization. Additional analysis
shows improved robustness against noisy historical data and increased alignment between recommended items and users' stated short-term objectives. The findings highlight the effectiveness of semantic user-control extraction in enhancing recommender accuracy and provide a scalable path for integrating intent-aware mechanisms in modern personalized recommendation systems.
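The abstract describes a control-aware rating function that adjusts the baseline matrix factorization output, but does not give its exact form. As a minimal sketch only, one plausible reading is a linear fusion in which the four LLM-derived controls are projected into the latent space and added to the standard matrix-factorization prediction; all variable names, the projection matrix W, and the bias terms below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# Hypothetical sketch of a control-aware rating function (assumed linear fusion;
# the paper's exact formulation is not specified in the abstract).

def predict_rating(p_u, q_i, c_u, W, mu=0.0, b_u=0.0, b_i=0.0):
    """Baseline matrix-factorization prediction adjusted by LLM-derived controls.

    p_u : (k,)   user latent factors
    q_i : (k,)   item latent factors
    c_u : (4,)   control vector [perspective, variation, organizing, restore]
    W   : (k, 4) assumed projection of control features into the latent space
    """
    baseline = mu + b_u + b_i + q_i @ p_u      # standard matrix factorization term
    control_adjustment = q_i @ (W @ c_u)       # control-aware correction term
    return baseline + control_adjustment

# Toy usage with random values
rng = np.random.default_rng(0)
k = 8
p_u, q_i = rng.normal(size=k), rng.normal(size=k)
c_u = np.array([0.7, 0.2, 0.9, 0.1])           # LLM-extracted controls, assumed in [0, 1]
W = rng.normal(scale=0.1, size=(k, 4))
print(predict_rating(p_u, q_i, c_u, W, mu=3.5))
```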
Keywords :
Recommender Systems, Large Language Models (LLMs), User Control Signals, Matrix Factorization, Hybrid Recommendation Framework, Intent-Aware Personalization, Preference Modeling, Semantic Feature Extraction, Accuracy Enhancement, User-Centric Recommendation, Context-Aware Recommendations, Control-Driven Recommenders.