Emotion Detection and Music Recommendation Using Deep Learning and Computer Vision


Authors : Kavnaa B N; Preethi K P

Volume/Issue : Volume 10 - 2025, Issue 8 - August


Google Scholar : https://tinyurl.com/bdhne7j8

Scribd : https://tinyurl.com/55kze833

DOI : https://doi.org/10.38124/ijisrt/25aug1648

Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.

Note : Google Scholar may take 30 to 40 days to display the article.


Abstract : This report presents a comprehensive approach to integrating emotion detection with music recommendation systems, leveraging the power of deep learning and computer vision. The primary objective is to create a personalized music experience by analyzing a user's real-time emotional state through facial expressions. We propose a system that utilizes a Convolutional Neural Network (CNN) for accurate emotion recognition from live video feeds or static images. The detected emotions (e.g., happy, sad, angry, neutral) are then mapped to a curated music database, where songs are categorized or tagged based on their emotional valence and arousal. This mapping allows the system to recommend music that either matches or aims to influence the user's current mood, providing a more intuitive and empathetic user experience than traditional content-based or collaborative filtering methods. Experimental results demonstrate the effectiveness of the CNN model in emotion classification and the feasibility of generating emotionally intelligent music recommendations, opening new avenues for adaptive user interfaces and personalized media consumption.

Keywords : Emotion Detection, Music Recommendation, Deep Learning, Computer Vision, Convolutional Neural Network (CNN), Affective Computing, Human–Computer Interaction, Recommender Systems, Facial Expression Recognition, Personalized Multimedia.
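The emotion-to-music mapping described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the valence/arousal coordinates for each emotion and the song tags are assumed placeholder values, and the recommendation step simply ranks songs by Euclidean distance from the detected emotion's coordinates.

```python
import math

# Hypothetical (valence, arousal) coordinates for each detected emotion;
# the paper does not publish its actual mapping, so these are illustrative.
EMOTION_VA = {
    "happy":   (0.9, 0.7),
    "sad":     (0.2, 0.2),
    "angry":   (0.2, 0.9),
    "neutral": (0.5, 0.4),
}

# A toy curated music database where each song is tagged with
# valence/arousal values, as the abstract describes.
SONGS = [
    {"title": "Sunrise Groove", "va": (0.85, 0.75)},
    {"title": "Rainy Window",   "va": (0.25, 0.15)},
    {"title": "Storm Front",    "va": (0.20, 0.90)},
    {"title": "Quiet Evening",  "va": (0.50, 0.35)},
]

def recommend(emotion: str, k: int = 1) -> list[str]:
    """Return the k songs whose valence/arousal tags lie closest
    (Euclidean distance) to the detected emotion's coordinates."""
    target = EMOTION_VA[emotion]
    ranked = sorted(SONGS, key=lambda s: math.dist(s["va"], target))
    return [s["title"] for s in ranked[:k]]
```

In a full pipeline, the `emotion` argument would come from the CNN classifier's prediction on a face crop; a mood-influencing (rather than mood-matching) strategy could instead target a shifted valence/arousal point.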

References :

  1. P. Ekman and W. V. Friesen, "Measuring facial movement," Environmental Psychology and Nonverbal Behavior, vol. 1, no. 1, pp. 56-75, 1976. (Conceptual basis for facial expressions)
  2. T. Ojala, M. Pietikainen, and D. Harwood, "A comparative study of texture measures for feature selection and classification," Pattern Recognition, vol. 29, no. 1, pp. 51-59, 1996. (LBP reference)
  3. S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio, "Contractive auto-encoders: Explicit invariance during feature learning," in Proc. ICML, 2011, pp. 833-840. (FER-2013 dataset context)
  4. B. Mollah, T. Siddique, and M. I. H. Khan, "Facial Expression Recognition using Convolutional Neural Network," in Proc. IEEE ICIOT, 2020, pp. 1-6. (General CNN for FER)
  5. S. Li and W. Deng, "Deep Facial Expression Recognition: A Survey," IEEE Transactions on Affective Computing, vol. 13, no. 3, pp. 1772-1793, 2022. (Recent survey on Deep FER)
  6. C. C. Aggarwal, "Recommender Systems: The Textbook," Springer, 2016. (General Recommender Systems textbook)
  7. J. B. Schafer, J. Konstan, and J. Riedl, "Recommender systems in e-commerce," in Proc. ACM EC, 1999, pp. 158-166. (Early Collaborative Filtering reference)
  8. R. Burke, "Hybrid recommender systems: Survey and experiments," User Modeling and User-Adapted Interaction, vol. 12, no. 4, pp. 331-370, 2002. (Hybrid Recommender Systems)
  9. A. Huynh, E. M. M. Kuijpers, and A. Schiesser, "Music Recommendation System based on Emotion and Mood," Journal of Machine Learning Research, 2015. (Early emotion-based music rec)
  10. J. S. Li, J. Y. Lee, H. S. Chung, and B. T. Zhang, "EEG-based music recommendation system using deep learning," in Proc. IEEE EMBC, 2017, pp. 433-436. (Physiological signals for music rec)
  11. T. S. Han, H. S. Ko, and M. Y. Sung, "Music emotion recognition for recommendation system using deep learning," in Proc. IEEE ICAIS, 2018, pp. 1-4. (Audio-based music emotion recognition)
  12. K. P. Singh and B. Singh, "Emotion detection from text for music recommendation," in Proc. IEEE ICCS, 2018, pp. 1-4. (Text-based emotion for music rec)
  13. S. Rahman, A. Hossain, and S. Iqbal, "Mood-Based Music Recommendation System Using Facial Expression," in Proc. IEEE TENCON, 2019, pp. 2000-2005. (Recent work on facial expression for music rec)

