Open Access
SHS Web Conf., Volume 147, 2022
SCAN'22 - 10e Séminaire de Conception Architecturale Numérique
Section: Paradigmes II
Article Number: 06003
Number of pages: 14
DOI: https://doi.org/10.1051/shsconf/202214706003
Published online: 12 October 2022
1. R. Abdal, Y. Qin, P. Wonka, Image2StyleGAN++: How to Edit the Embedded Images? (KAUST University Research, 2020)
2. E. Ahmed, A. Saint, A. Shabayek, K. Cherenkova, R. Das, G. Gusev and D. Aouada, A survey on Deep Learning Advances on Different 3D Data Representations (University of Luxembourg, 2019)
3. M. Arjovsky, S. Chintala and L. Bottou, Wasserstein GAN, arXiv:1701.07875 [cs, stat] (2017)
4. M. Bachl and D.C. Ferreira, City-GAN: Learning architectural styles using a custom Conditional GAN architecture, arXiv:1907.05280v2 [cs.CV] (2020)
5. M. Bronstein, J. Bruna, T. Cohen, P. Velickovic, Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges (2021)
6. S. Chaillou, AI + Architecture - Towards a New Approach, thesis, Harvard University (2019)
7. C. Donahue, Z.C. Lipton, A. Balsubramani and J. McAuley, Semantically decomposing the latent spaces of generative adversarial networks, ICLR (2018)
8. L.A. Gatys, A.S. Ecker, M. Bethge, A Neural Algorithm of Artistic Style, arXiv:1508.06576v2 [cs.CV] (2015)
9. J.S. Gero, Computational models of creative designing based on situated cognition, Creativity and Cognition, ACM Press, pp. 3–10 (2002)
10. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative Adversarial Networks, Advances in Neural Information Processing Systems 27 (2014)
11. E. Härkönen, A. Hertzmann, J. Lehtinen, S. Paris, GANSpace: Discovering Interpretable GAN Controls (2020)
12. Y. Hong, U. Hwang, J. Yoo and S. Yoon, How Generative Adversarial Networks and Their Variants Work: An Overview, ACM Computing Surveys (2019)
13. W. Huang, H. Zheng, Architectural drawings recognition and generation through machine learning, in: Recalibration Imprecision Infidelity - Proc. 38th Annu. Conf. Assoc. Comput. Aided Des. Archit. ACADIA 2018, pp. 156–165 (2018)
14. P. Isola, J.Y. Zhu, T. Zhou, A. Efros, Image-to-Image Translation with Conditional Adversarial Networks, Proc. CVPR (2017)
15. T. Karras, S. Laine, T. Aila, J. Lehtinen, Progressive growing of GANs for improved quality, stability and variation, ICLR (2018)
16. T. Karras, S. Laine, T. Aila, A Style-Based Generator Architecture for Generative Adversarial Networks, Proc. CVPR (2019)
17. T. Karras, M. Aittala, J. Hellsten, S. Laine, J. Lehtinen and T. Aila, Training Generative Adversarial Networks with Limited Data, NeurIPS (2020)
18. T. Karras, M. Aittala, J. Hellsten, S. Laine, E. Härkönen, J. Lehtinen and T. Aila, Alias-Free Generative Adversarial Networks, arXiv:2106.12423v1 (2021)
19. Y. LeCun, Qu'est-ce que l'intelligence artificielle (Collège de France, 2016)
20. H. Ling, K. Kreis, D. Li, S.W. Kim, A. Torralba and S. Fidler, EditGAN: High-Precision Semantic Image Editing, arXiv:2111.03186v1 [cs.CV] (2021)
21. H. Liu, J. Zhang, J. Zhu and S.C.H. Hoi, DeepFacade: a deep learning approach to facade parsing (2017)
22. N. Nauata, S. Hosseini, K.H. Chang, H. Chu, C.Y. Cheng and Y. Furukawa, House-GAN++: Generative Adversarial Layout Refinement Network towards Intelligent Computational Agent for Professional Architects, CVPR (2021)
23. D. Newton, Generative Deep Learning in Architectural Design, Technol. Archit. + Des., vol. 3, no. 2, pp. 176–189 (2019)
24. X. Pan, B. Dai, Z. Liu, C.C. Loy and P. Luo, Do 2D GANs know 3D shape? Unsupervised 3D shape reconstruction from 2D image GANs, ICLR (2021)
25. T. Park, M.Y. Liu, T.C. Wang and J.Y. Zhu, Semantic Image Synthesis with Spatially-Adaptive Normalization, NVIDIA Research (2019)
26. A. Radford, L. Metz and S. Chintala, Unsupervised representation learning with deep convolutional GANs, ICLR (2016)
27. J. Schmidhuber, Deep Learning in Neural Networks: An Overview, IDSIA, Switzerland (2014)
28. E. Schönfeld, V. Sushko, D. Zhang, J. Gall, B. Schiele and A. Khoreva, You only need adversarial supervision for semantic image synthesis, Proc. ICLR (2021)
29. Y. Shen and B. Zhou, Closed-Form Factorization of Latent Semantics in GANs, Proc. CVPR (2021)
30. J. Silvestre, Y. Ikeda and F. Guena, Artificial imagination of architecture with deep convolutional neural network, Proc. CAADRIA, pp. 881–890 (2016)
31. O. Tov, Y. Alaluf, Y. Nitzan, O. Patashnik and D. Cohen-Or, Designing an Encoder for StyleGAN Image Manipulation, arXiv:2102.02766v1 [cs.CV] (2021)
32. T.C. Wang, M.Y. Liu, J.Y. Zhu, A. Tao, J. Kautz and B. Catanzaro, High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs, Proc. CVPR (2018)
33. Z. Wu, D. Lischinski and E. Shechtman, StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation, arXiv:2011.12799v2 [cs.CV] (2020)
34. W. Xia, Y. Zhang, Y. Yang, J.H. Xue, B. Zhou and M.H. Yang, GAN Inversion: A Survey, Proc. CVPR (2021)
35. H. Zhang, 3D Model Generation on Architectural Plan and Section Training through Machine Learning, Technologies, 7, 82, doi:10.3390/technologies7040082 (2019)
36. J.Y. Zhu, T. Park, P. Isola and A. Efros, Unpaired image-to-image translation using Cycle-Consistent Adversarial Networks (2018)
37. J. Zhu, Y. Shen, D. Zhao, B. Zhou, In-Domain GAN Inversion for Real Image Editing (2020)
38. P. Zhu, R. Abdal, Y. Qin, J. Femiani and P. Wonka, Improved StyleGAN Embedding: Where are the Good Latents? (2021)
39. D. Berthelot, T. Schumm and L. Metz, BEGAN: Boundary Equilibrium Generative Adversarial Networks, arXiv:1703.10717v4 [cs.LG] (2017)
40. A. Still, M. D'Inverno, A History of Creativity for Future AI Research, Proceedings of the Seventh International Conference on Computational Creativity (2016)
41. A. Elgammal, B. Liu, M. Elhoseiny and M. Mazzone, CAN: Creative Adversarial Networks Generating “Art” by Learning About Styles and Deviating from Style Norms, Eighth International Conference on Computational Creativity (ICCC), Atlanta (2017)
42. S. Colton, Creativity versus the perception of creativity in computational systems, AAAI Spring Symposium: Creative Intelligent Systems, vol. 8 (2008)
43. A.N. Whitehead, Process and Reality (New York: The Free Press, 1978)
44. M. Negrotti, Alternative intelligence, in M. Negrotti (ed.), Understanding the Artificial: On the Future Shape of Artificial Intelligence, pp. 55–75, London: Springer-Verlag (1991)
