Open Access
SHS Web Conf.
Volume 194, 2024
The 6th ETLTC International Conference on ICT Integration in Technical Education (ETLTC2024)
Article Number 01001
Number of page(s) 12
Section Intelligent Applications in Society
Published online 26 June 2024
  1. Monira Islam, Tan Lee, "MEMD-HHT based Emotion Detection from EEG using 3D CNN", July 11-15, 2022
  2. Hongli Zhang, "Expression-EEG Based Collaborative Multimodal Emotion Recognition Using Deep AutoEncoder", September 7, 2020
  3. Guosheng Yang, Rui Jiao, Huiping Jiang, and Ting Zhang, "Ground Truth Dataset for EEG-Based Emotion Recognition With Visual Indication", October 13, 2020
  4. Md. Rabiul Islam, Mohammad Ali Moni, Md. Milon Islam, "Emotion Recognition From EEG Signal Focusing on Deep Learning and Shallow Learning Techniques", June 22, 2021
  5. Dahua Li, Jiayin Liu, Yi Yang, Fazheng Hou, Haotian Song, Yu Song, "Emotion Recognition of Subjects With Hearing Impairment Based on Fusion of Facial Expression and EEG Topographic Map", IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 31, 2023
  6. Salma Alhagry, Aly Aly Fahmy, Reda A. El-Khoribi, "Emotion Recognition based on EEG using LSTM Recurrent Neural Network", vol. 8, no. 10, 2017
  7. Saeed Turabzadeh, Hongying Meng, Rafiq M. Swash, Matus Pleva, and Jozef Juhar, "Facial Expression Emotion Detection for Real-Time Embedded Systems", January 26, 2018
  8. Saket S. Kulkarni, Narender P. Reddy, and S. I. Hariharan, "Facial expression (mood) recognition from facial images using committee neural networks", August 5, 2009
  9. J. A. Russell, "A circumplex model of affect", Journal of Personality and Social Psychology, vol. 39, no. 6, pp. 1161-1178, 1980
  10. Katie Moraes de Almondes, Francisco Wilson Nogueira Holanda Júnior, Maria Emanuela Matos Leonardo, and Nelson Torro Alves, "Facial Emotion Recognition and Executive Functions", April 17, 2020
  11. E. S. Salama, R. A. El-Khoribi, M. E. Shoman, and M. A. W. Shalaby, "EEG-based emotion recognition using 3D convolutional neural networks", Int. J. Adv. Comput. Sci. Appl., vol. 9, no. 8, pp. 329-337, 2018
  12. P. Gaur, R. B. Pachori, H. Wang, and G. Prasad, "An automatic subject specific intrinsic mode function selection for enhancing two-class EEG-based motor imagery-brain computer interface", IEEE Sensors Journal, vol. 19, no. 16, pp. 6938-6947, 2019
  13. S. B. Wankhade and D. D. Doye, "Deep learning of empirical mean curve decomposition-wavelet decomposed EEG signal for emotion recognition", Int. J. Uncertainty, Fuzziness Knowl.-Based Syst., vol. 28, no. 1, pp. 153-177, Feb. 2020
  14. S. Zhao, A. Gholaminejad, G. Ding, Y. Gao, J. Han, and K. Keutzer, "Personalized emotion recognition by personality-aware high-order learning of physiological signals", ACM Trans. Multimedia Comput., Commun., Appl., vol. 15, no. 1s, pp. 1-18, Feb. 2019
  15. S. Aydin, "Deep learning classification of neuroemotional phase domain complexity levels induced by affective video film clips", IEEE J. Biomed. Health Informat., vol. 24, no. 6, pp. 1695-1702, Jun. 2020
  16. H. Kawano, A. Seo, Z. G. Doborjeh, N. Kasabov, and M. G. Doborjeh, "Analysis of similarity and differences in brain activities between perception and production of facial expressions using EEG data and the NeuCube spiking neural network architecture", in Proc. Int. Conf. Neural Inf. Process., vol. 9950, 2016
  17. Z. Liang et al., "EEGFuseNet: Hybrid unsupervised deep feature characterization and fusion for high-dimensional EEG with an application to emotion recognition", IEEE Trans. Neural Syst. Rehabil. Eng., vol. 29, pp. 1913-1925, 2021
  18. W. Huang, Y. Xue, L. Hu, and H. Liuli, "S-EEGNet: Electroencephalogram Signal Classification Based on a Separable Convolution Neural Network With Bilinear Interpolation", IEEE Access, vol. 8, pp. 131636-131646, 2020
  19. E. S. Salama, R. A. El-Khoribi, M. E. Shoman, and M. A. W. Shalaby, "A 3D-convolutional neural network framework with ensemble learning techniques for multi-modal emotion recognition", Egyptian Informatics Journal, vol. 22, no. 2, pp. 167-176, 2021
  20. J. Chen, H. Li, L. Ma, H. Bo, and X. Gao, "Application of EEMD-HHT method on EEG analysis for speech evoked emotion recognition", in Proc. 2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), IEEE, 2020
  21. A. Graves and J. Schmidhuber, "Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures", Neural Networks, vol. 18, no. 5-6, pp. 602-610, June/July 2005
  22. V. Blanz, K. Scherbaum, and H. Seidel, "Fitting a morphable model to 3D scans of faces", in Proceedings of International Conference on Computer Vision, 2007
  23. I. Kotsia and I. Pitas, "Facial expression recognition in image sequences using geometric deformation features and support vector machines", IEEE Transactions on Image Processing, vol. 16, no. 1, 2007
  24. P. Ekman, "Universals and cultural differences in facial expressions of emotion", in Nebraska Symposium on Motivation 1971, J. Cole, Ed., vol. 19, Lincoln, NE: University of Nebraska Press, 1972, pp. 207-283
  25. M. Schuster and K. K. Paliwal, "Bidirectional recurrent neural networks", IEEE Transactions on Signal Processing, vol. 45, pp. 2673-2681, November 1997
  26. J. Cohn, A. Zlochower, J.-J. J. Lien, and T. Kanade, "Feature-point tracking by optical flow discriminates subtle differences in facial expression", in Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, April 1998, pp. 396-401
  27. S. Romdhani, "Face image analysis using a multiple feature fitting strategy", Ph.D. dissertation, University of Basel, Computer Science Department, Basel, CH, January 2005
