March 18th - 20th
Instructors: George Guida, Daniel Escobar, Carlos Navarro
The recent convergence of text and image processing in artificial intelligence has taken the world by storm. Text-to-image tools such as DALL-E, Midjourney, and Stable Diffusion can now generate a virtually limitless stream of high-quality images in short inference times. This raises pressing questions: how will generative AI affect the AEC industry, and how will these tools be integrated into our existing workflows of 3D form generation?

This workshop extends beyond the capabilities of image-based tools towards the creation, curation, and manipulation of synthetic 3D forms. We will introduce novel techniques for text-to-3D synthesis, using diffusion models trained on billions of image-text pairs and adapted to create 3D Neural Radiance Fields (NeRFs). This will be followed by a phase of geometry manipulations and interpolations across quasi-architectural forms and material-driven constraints. Through this process of form-finding, a new paradigm of blending digital media with physical reality emerges, bringing forward new material expressions and architectural hybrids.

The workshop also foregrounds questions of agency and authorship within the creative process, made especially urgent by the rise of lawsuits over misused intellectual property in the training datasets of these models. Our agency as architects must be asserted through a process of curation informed by professional knowledge - of datasets, models, inputs, and outputs - navigating a feedback loop between human intuition and trained AI models. Students will take a hands-on approach to hybridizing and creating new 3D forms and prototypes from personalized architectural datasets and content.
Following an introduction to ML applications in architectural design, students will develop three sequential exercises: 1) form making using neural radiance fields; 2) form remixing with customized text-to-image processes; and 3) form curating of 3D architectural interventions by manipulating the results of the first two steps. The course thus engages ways to reposition language within the design process, ultimately challenging current cultural practices of architectural production and consumption.
Date: 18th – 20th March
Duration: 3 hours/day
Day 1 - 2.00 pm - 5.00 pm Eastern Standard Time (EST)
Day 2 - 2.00 pm - 5.00 pm Eastern Standard Time (EST)
Day 3 - 9.00 am - 12.00 pm Eastern Standard Time (EST)
Class Size: 8 - 20
Carlos Navarro holds a Master of Design Research from the Southern California Institute of Architecture (SCI-Arc) and a professional degree in Architecture from the Pontifical Catholic University of Peru. He is an architectural designer with a background at firms in Los Angeles and Lima, including OFFICEUNTITLED, Steinberg Hart, P-A-T-T-E-R-N-S, and MASUNOSTUDIO. His work blurs established boundaries between the architectural discipline and computer science, generative art, interactive design, and mixed reality. Parallel to his practice, he is continually involved in architectural technology research through teaching and academic collaborations. He has taught at Universidad de Los Andes, Bogota, and has run workshops at DesignMorphine and the Harvard Graduate School of Design (GSD). He has also spoken and run AI-related workshops at renowned Computer-Aided Architectural Design conferences, including CAADRIA 2022, DigitalFUTURES 2022, DigitalFUTURES Talks & Tutorials, and the SHARE Bucharest 2022 Forum.
Daniel Escobar holds an MSc in Computer Science from Georgia Tech and a BSc in Architecture from NYIT. He is an architectural and creative designer with experience across different architectural scales and media. He co-founded the design studio OLA, which explores contemporary technologies and their use in the generation of media, including architectural space. He also runs the online platform diffusionarchitecture.com, which curates AI-generated architecture and disseminates 3D AI research and interdisciplinary workshops. His work on generative text-to-image AI was published at ACADIA 2021. He has taught multiple workshops and given presentations on using AI for creative design. With OLA, he won a funding award to develop a virtual tower design in collaboration with AI.
George Guida is a research associate at the Harvard Laboratory for Design Technologies and co-founder of the design practice ArchiTAG. His primary research focuses on synthesizing design and technology through machine learning, generative design, and immersive mixed realities. He has conducted conference workshops on creative applications of multimodal AI at DigitalFUTURES, SIGraDi, and the Harvard Graduate School of Design. He has lectured internationally at the Share-Architects Forum, the Chinese University of Hong Kong, and the Rhode Island School of Design. He has worked as an ARB/RIBA architect and LEED AP at Foster + Partners, Certain Measures, and the MIT Media Lab, and completed his graduate studies at the Harvard GSD.