Real Time Emotion Recognition and Classification for Diverse Suggestions using Deep Learning

A Comprehensive Survey

Abstract

At the captivating nexus of technology and human emotions, we harness cutting-edge software to discern individuals' feelings from their facial expressions. This capability empowers us to provide customized recommendations for various forms of entertainment and enrichment, such as movies, music, books, and meditation practices. For instance, if a user appears sad, we can offer upbeat music to brighten their mood. Our ultimate objective is to enhance technology's comprehension of emotions, enabling it to suggest content that resonates with people's emotional states. By integrating facial expression analysis with diversified suggestions, we aim to cultivate a more user-friendly and supportive digital environment that fosters feelings of happiness and calmness. Exploring the domain of facial emotion recognition and recommendation systems is the central emphasis of this endeavor. The primary ambition is to construct a framework capable of deciphering user emotions from facial cues. Deep learning techniques, in particular a convolutional neural network (CNN), are applied to facial images to extract emotion recognition feature vectors, which are then used to categorize and recommend individualized content. The work thus seeks a balanced blend of advanced technology and heightened user engagement, nurturing emotional connections in the digital domain.
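The pipeline outlined above can be sketched in code. The following is a minimal illustration only, assuming a small Keras CNN over FER-2013-style 48x48 grayscale face crops; the layer configuration, emotion labels, and the emotion-to-suggestion mapping are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch: CNN-based emotion classification feeding a content-suggestion lookup.
# The architecture, labels, and SUGGESTIONS table are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]  # FER-2013-style labels

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=len(EMOTIONS)):
    """Small CNN mapping a grayscale face crop to emotion probabilities."""
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical mapping from detected emotion to a suggestion category.
SUGGESTIONS = {
    "sad": "upbeat music playlist",
    "happy": "comedy movie",
    "angry": "guided meditation",
    "fear": "calming audiobook",
    "surprise": "new book releases",
    "disgust": "light-hearted podcast",
    "neutral": "trending content",
}

def suggest(face_batch, model):
    """Predict emotions for a batch of 48x48 grayscale faces and return (emotion, suggestion) pairs."""
    probs = model.predict(face_batch, verbose=0)
    labels = [EMOTIONS[i] for i in np.argmax(probs, axis=1)]
    return [(lbl, SUGGESTIONS.get(lbl, "general recommendations")) for lbl in labels]

if __name__ == "__main__":
    model = build_emotion_cnn()  # would normally be trained on a labeled facial-expression dataset
    dummy_faces = np.random.rand(2, 48, 48, 1).astype("float32")  # stand-in for detected face crops
    print(suggest(dummy_faces, model))
```

In a real-time setting, the face crops would come from a face detector running on a webcam stream, and the softmax output would drive the recommendation module described in the abstract.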


