Issue | SHS Web Conf., Volume 194 (2024): The 6th ETLTC International Conference on ICT Integration in Technical Education (ETLTC2024)
Article Number | 01001
Number of page(s) | 12
Section | Intelligent Applications in Society
DOI | https://doi.org/10.1051/shsconf/202419401001
Published online | 26 June 2024
Human Behaviour Analysis Using CNN
MIT-ADT University, MIT ADT Campus, Rajbaugh, Loni Kalbhor – 412201, +91 9764922888
* Corresponding author: Dr. Anupama Budhewar, Email: anupama.budhewar@mituniversity.edu.in
Emotion recognition has been the subject of extensive research due to its significant impact on various domains, including healthcare, human-computer interaction, and marketing. Traditional methods of emotion recognition rely on visual cues, such as facial expressions, to decipher emotional states. However, these methods often fall short when dealing with individuals who have limited ability to express emotions through facial expressions, such as individuals with certain neurological disorders.
This research paper proposes a novel approach to emotion recognition that combines facial expression analysis with electroencephalography (EEG) data. Deep learning techniques are applied to extract features from facial expressions captured through video analysis, while the corresponding EEG signals are analyzed in parallel. The goal is to improve emotion recognition accuracy by exploiting the complementary information provided by facial expressions and EEG data.
Emotion recognition is a challenging task that has gained considerable attention in recent years. Diverse and increasingly refined approaches to recognizing emotions based on facial expressions, voice analysis, physiological signals, and behavioral patterns have been developed. While facial expression analysis has been a dominant approach, it falls short in cases where individuals cannot effectively express emotions through their faces. To overcome these limitations, alternative methods that can provide a more accurate assessment of emotions need to be explored. This research paper investigates the collaboration and interaction between facial expressions and EEG data for emotion recognition; by combining the information from both modalities, the accuracy and robustness of emotion recognition systems are expected to improve. The work spans conducting literature reviews, designing and fine-tuning deep learning models for feature extraction, developing fusion models that combine features from facial expressions and EEG data (a minimal sketch of such a fusion architecture follows this paragraph), performing experimentation and evaluation, writing papers and documentation, preparing presentations for dissemination, and holding regular meetings and discussions for effective collaboration. Attention to ethical considerations, robustness and generalizability, continual learning and skill development, and the use of collaboration tools and platforms also contributes to the project's success.
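To make the described two-branch design concrete, the sketch below shows one plausible feature-level fusion model in PyTorch: a small CNN encodes face crops, a 1-D CNN encodes EEG windows, and the concatenated embeddings are classified into discrete emotions. The 48x48 grayscale input, the 32-channel/128-sample EEG window, the 7-class label set, and all layer sizes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class FaceBranch(nn.Module):
    """CNN encoder for 48x48 grayscale face crops (assumed input size)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(128, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class EEGBranch(nn.Module):
    """1-D CNN encoder for EEG windows shaped (channels, samples)."""
    def __init__(self, channels: int = 32, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 64, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 128, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(), nn.Linear(128, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class FusionEmotionNet(nn.Module):
    """Feature-level fusion: concatenate both embeddings, then classify."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.face = FaceBranch()
        self.eeg = EEGBranch()
        self.classifier = nn.Sequential(
            nn.Linear(128 + 128, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, num_classes),
        )

    def forward(self, face_img, eeg_window):
        fused = torch.cat([self.face(face_img), self.eeg(eeg_window)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = FusionEmotionNet()
    faces = torch.randn(4, 1, 48, 48)   # batch of face crops
    eeg = torch.randn(4, 32, 128)       # batch of EEG windows
    print(model(faces, eeg).shape)      # torch.Size([4, 7])
```

Concatenation is only one fusion strategy; the same skeleton could instead use weighted averaging of per-branch predictions (decision-level fusion) or attention over the two embeddings, depending on what the full paper evaluates.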
Key words: Affective Computing / Multimodal Emotion Recognition / Facial Expression Analysis / EEG Data Processing / Deep Learning Models / Feature Extraction / Real-time Emotion Detection / Individual Differences / Personalized Emotional Profiling / Human-Computer Interaction / Collaborative Environments / Mental Health Applications / Ethical Considerations / Privacy Protection / Informed Consent / Machine Learning Algorithms / Model Training and Optimization / Algorithmic Approaches / Virtual Reality Integration / Neuro-feedback Systems
© The Authors, published by EDP Sciences, 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.