SHS Web Conf.
Volume 144, 2022
2022 International Conference on Science and Technology Ethics and Human Future (STEHF 2022)
Section: Mobile Communication Technology and Prospects of Frontier Technology
Article Number: 02006
Number of pages: 5
DOI: https://doi.org/10.1051/shsconf/202214402006
Published online: 26 August 2022
Open Access