Open Access

SHS Web Conf., Volume 214, 2025
CIFEM’2024 - 4e édition du Colloque International sur la Formation et l’Enseignement des Mathématiques et des Sciences & Techniques

Article Number: 01001
Number of page(s): 12
DOI: https://doi.org/10.1051/shsconf/202521401001
Published online: 28 March 2025
- A. Peña-Ayala, “Educational data mining: A survey and a data mining-based analysis of recent works,” Expert Systems with Applications, vol. 41, no. 4, pp. 1432–1462, Mar. 2014, DOI: 10.1016/j.eswa.2013.08.042.
- H. Taherdoost, “Deep Learning and Neural Networks: Decision-Making Implications,” Symmetry, vol. 15, no. 9, Art. no. 1723, Sep. 2023, DOI: 10.3390/sym15091723.
- Y. Dong, H. Su, J. Zhu, and B. Zhang, “Improving Interpretability of Deep Neural Networks with Semantic Information,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI: IEEE, Jul. 2017, pp. 975–983, DOI: 10.1109/CVPR.2017.110.
- S. Chakraborty et al., “Interpretability of deep learning models: A survey of results,” in 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), San Francisco, CA: IEEE, Aug. 2017, pp. 1–6, DOI: 10.1109/UIC-ATC.2017.8397411.
- H. Kuwajima, M. Tanaka, and M. Okutomi, “Improving transparency of deep neural inference process,” Progress in Artificial Intelligence, vol. 8, no. 2, pp. 273–285, Jun. 2019, DOI: 10.1007/s13748-019-00179-x.
- M. Vega García and J. L. Aznarte, “Shapley additive explanations for NO2 forecasting,” Ecological Informatics, vol. 56, p. 101039, Mar. 2020, DOI: 10.1016/j.ecoinf.2019.101039.
- A. Barredo Arrieta et al., “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,” Information Fusion, vol. 58, pp. 82–115, Jun. 2020, DOI: 10.1016/j.inffus.2019.12.012.
- A. Adadi and M. Berrada, “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI),” IEEE Access, vol. 6, pp. 52138–52160, 2018, DOI: 10.1109/ACCESS.2018.2870052.
- M. M. Taye, “Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions,” Computers, vol. 12, no. 5, p. 91, Apr. 2023, DOI: 10.3390/computers12050091.
- C. Janiesch, P. Zschech, and K. Heinrich, “Machine learning and deep learning,” Electronic Markets, vol. 31, no. 3, pp. 685–695, Sep. 2021, DOI: 10.1007/s12525-021-00475-2.
- M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA: ACM, Aug. 2016, pp. 1135–1144, DOI: 10.1145/2939672.2939778.
- R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, “A Survey of Methods for Explaining Black Box Models,” ACM Computing Surveys, vol. 51, no. 5, pp. 1–42, Sep. 2019, DOI: 10.1145/3236009.
- F.-L. Fan, J. Xiong, M. Li, and G. Wang, “On Interpretability of Artificial Neural Networks: A Survey,” IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 5, no. 6, pp. 741–760, Nov. 2021, DOI: 10.1109/TRPMS.2021.3066428.
- S. Lundberg and S.-I. Lee, “A Unified Approach to Interpreting Model Predictions,” 2017, arXiv:1705.07874, DOI: 10.48550/arXiv.1705.07874.
- N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, Jan. 2014. [Online]. Available: https://dl.acm.org/doi/10.5555/2627435.2670313
- C. Garbin, X. Zhu, and O. Marques, “Dropout vs. batch normalization: an empirical study of their impact to deep learning,” Multimedia Tools and Applications, vol. 79, no. 19-20, pp. 12777–12815, May 2020, DOI: 10.1007/s11042-019-08453-9.
- L. Prechelt, “Early Stopping - But When?,” in Neural Networks: Tricks of the Trade, G. B. Orr and K.-R. Müller, Eds., Berlin, Heidelberg: Springer, 1998, pp. 55–69, DOI: 10.1007/3-540-49430-8_3.
- J. Bjorck, C. Gomes, B. Selman, and K. Q. Weinberger, “Understanding Batch Normalization,” 2018, arXiv:1806.02375, DOI: 10.48550/arXiv.1806.02375.
- S. Santurkar, D. Tsipras, A. Ilyas, and A. Madry, “How Does Batch Normalization Help Optimization?,” Apr. 15, 2019, arXiv:1805.11604. Accessed: Oct. 16, 2024. [Online]. Available: http://arxiv.org/abs/1805.11604
- S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” Mar. 02, 2015, arXiv:1502.03167. Accessed: Oct. 16, 2024. [Online]. Available: http://arxiv.org/abs/1502.03167
- V. Vishwarupe, P. M. Joshi, N. Mathias, S. Maheshwari, S. Mhaisalkar, and V. Pawar, “Explainable AI and Interpretable Machine Learning: A Case Study in Perspective,” Procedia Computer Science, vol. 204, pp. 869–876, 2022, DOI: 10.1016/j.procs.2022.08.105.
- F. Oviedo, J. L. Ferres, T. Buonassisi, and K. T. Butler, “Interpretable and Explainable Machine Learning for Materials Science and Chemistry,” Accounts of Materials Research, vol. 3, no. 6, pp. 597–607, Jun. 2022, DOI: 10.1021/accountsmr.1c00244.
- R. Marcinkevičs and J. E. Vogt, “Interpretable and explainable machine learning: A methods-centric overview with concrete examples,” WIREs Data Mining and Knowledge Discovery, vol. 13, no. 3, p. e1493, 2023, DOI: 10.1002/widm.1493.