Authors :
Harshith Manoharan; Keerthana R E; N. Selvaganesh; Logeswari P
Volume/Issue :
Volume 10 - 2025, Issue 5 - May
Google Scholar :
https://tinyurl.com/bddwjed6
DOI :
https://doi.org/10.38124/ijisrt/25may894
Abstract :
The growing need for accessible mental health support highlights the importance of innovative digital solutions. Many individuals struggle to manage their emotions, leading to heightened stress, anxiety, and a decline in well-being. Traditional methods such as journaling or therapy, while beneficial, can often feel time-consuming, intimidating, or inaccessible. Current mental health apps frequently fall short, lacking emotional analysis. There is a rising demand for a non-intrusive, user-friendly solution that can monitor emotions and provide meaningful insights outside conventional therapy. Mind Your Mind addresses this gap with a voice-based journaling system powered by emotional analysis. Using advanced speech processing, the platform evaluates tone, pitch, and sentiment to assess emotional states as users speak naturally. It employs an AI-driven emotion recognition model that integrates Mel-Frequency Cepstral Coefficients (MFCCs), Mel-spectrograms, and Convolutional Neural Networks (CNNs) for accurate pattern recognition. The model achieves an accuracy of 92.3%, enabling reliable emotion detection. Users interact through an intuitive web interface, recording their thoughts and receiving immediate, actionable mood insights in textual format.
Keywords :
Emotion Recognition, Mental Health, Speech Analysis, Voice Journaling.
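The abstract describes a pipeline that combines MFCC and Mel-spectrogram features with a CNN classifier. The sketch below illustrates what such a pipeline could look like; it is not the authors' implementation. It assumes the librosa and TensorFlow libraries, and the sample rate, feature dimensions, network layers, and emotion label set are illustrative assumptions rather than details taken from the paper.

import numpy as np
import librosa
import tensorflow as tf

# Assumed configuration (not specified in the paper).
SR = 16000          # sample rate
N_MELS = 64         # mel bands
N_MFCC = 40         # MFCC coefficients
EMOTIONS = ["angry", "happy", "neutral", "sad"]  # hypothetical label set

def extract_features(wav_path: str, duration: float = 3.0) -> np.ndarray:
    """Load a voice-journal clip and stack log-Mel-spectrogram and MFCC frames."""
    y, _ = librosa.load(wav_path, sr=SR, duration=duration)
    y = librosa.util.fix_length(y, size=int(SR * duration))  # pad/trim to a fixed length
    mel = librosa.feature.melspectrogram(y=y, sr=SR, n_mels=N_MELS)
    log_mel = librosa.power_to_db(mel)
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=N_MFCC)
    feats = np.vstack([log_mel, mfcc])            # (N_MELS + N_MFCC, frames)
    return feats[..., np.newaxis]                 # add a channel axis for the CNN

def build_cnn(input_shape, num_classes=len(EMOTIONS)) -> tf.keras.Model:
    """A small 2-D CNN over the time-frequency features (layer sizes are illustrative)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    x = extract_features("journal_entry.wav")     # hypothetical input file
    model = build_cnn(x.shape)                    # would be trained on labeled speech data
    probs = model.predict(x[np.newaxis, ...])[0]  # untrained here, so output is arbitrary
    print(dict(zip(EMOTIONS, probs.round(3))))

Stacking the log-Mel bands and MFCCs along the frequency axis is only one plausible way to combine the two feature types; the paper does not specify how they are fused, and separate input branches merged before the dense layers would be an equally reasonable design.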