The Diffusion Architect 2.0 empowers participants to use advanced AI models like SDXL and Flux to transform architectural design and unlock creativity.
In this workshop, we will cover everything you need to harness the power of AI image diffusion models in practical architectural design workflows. After completing this workshop, you will be able to use these models as design assistants that add quality to your project narratives and help you generate fresh ideas at every stage of design. We will go through the basics of how these models are trained, how they function, and how to run them either locally on your machine or through a web service. We will then focus on advanced workflows and model combinations that specifically enhance architectural design processes, so you can apply them in your day-to-day work as an architect.
The workshop covers two state-of-the-art models, SDXL and Flux, and explains the best use cases for each. It also introduces an array of user interfaces for running them, such as Automatic1111, ComfyUI, and Fooocus. We will combine these models with ControlNets and custom-trained LoRA models to guide generation toward specific compositions and aesthetics.
After attending this workshop, you will be able to create and share your own custom workflows. We will touch on many subjects, including text-to-image, image-to-image, Rhino-to-image, Revit-to-image, Enscape-to-image, in-context generation, image referencing, regional conditioning, inpainting, upscaling, and more. This is the complete package to start designing with AI diffusion models.
What You’ll Learn
Understand the fundamentals of AI diffusion models and their role in architectural design workflows.
Learn to use SDXL and Flux models effectively for high-quality and innovative design outputs.
Master tools like Automatic1111, ComfyUI, and Fooocus for running AI-powered workflows.
Explore advanced techniques such as ControlNets, LoRA models, inpainting, and upscaling.
Build and implement custom AI workflows tailored to your architectural projects.
Methodology:
This workshop adopts a hands-on, practical approach to mastering AI diffusion models for architectural design. It starts with a deep dive into foundational concepts, including how these models are trained and how they generate images. Participants will then explore tools like Automatic1111, ComfyUI, and Fooocus to run these models effectively.
Building on this foundation, the workshop will focus on combining state-of-the-art models like SDXL and Flux with advanced techniques such as ControlNets and LoRA models. You will engage in practical exercises to apply workflows like text-to-image, image referencing, Rhino-to-image, and in-context generation alongside advanced methods like inpainting and upscaling.
By the end, you’ll create custom workflows designed to enhance creativity and efficiency in your architectural projects.
Participants should have Stable Diffusion and ControlNet installed locally (see the guide "How to install SD and ControlNet"). Alternatively, participants can run Stable Diffusion on a server through RunDiffusion, a platform focused on the Stable Diffusion model. RunDiffusion charges based on server rental time and the number of models used, and it supports ControlNet.
Content
Intro + PAACADEMY Updates (08:49)
Introduction to the workshop - PAACADEMY Updates
Presentation (36:54)
Overview of AI-generated images
This presentation focuses on providing an introductory overview of diffusion models, their architecture, and their practical use in generating images for architectural workflows using Stable Diffusion and fine-tuned models.
Text-to-Image Explained (47:05)
Generating images through prompt engineering
This session focuses on the step-by-step process of generating images from text prompts using diffusion models, exploring settings like sampling methods, steps, control weights, and seed values to achieve desired outcomes.
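For readers who prefer scripting, the same controls the session walks through in the UI (sampling method, steps, guidance, seed) can be reproduced with the Hugging Face diffusers library. This is a minimal sketch, not part of the workshop material; the model ID, prompt, and settings below are illustrative assumptions.

```python
# Minimal text-to-image sketch with diffusers (the workshop uses Automatic1111,
# but the knobs are the same: sampler, steps, guidance, seed).
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Swap the sampler ("sampling method" in Automatic1111 terms).
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

generator = torch.Generator("cuda").manual_seed(42)  # fixed seed -> reproducible image
image = pipe(
    prompt="aerial view of a timber pavilion in a coastal park, golden hour, photorealistic",
    negative_prompt="blurry, low quality, distorted geometry",
    num_inference_steps=30,   # denoising steps
    guidance_scale=7.0,       # prompt adherence (CFG)
    generator=generator,
).images[0]
image.save("pavilion_concept.png")
```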
Revit Models to Images (24:33)
Transforming masses into architectural visuals
This session focuses on converting Revit massing models into detailed architectural images using ControlNets, text prompts, and LoRAs to explore iterative design workflows.
Advanced Image Control and Style Workflow (49:04)
Combining prompts and style control
This segment introduces a workflow that generates detailed and stylistically coherent architectural visuals by combining text prompts, image-to-image processing, ControlNets, and style conditioning through image prompts.
This session focuses on leveraging ControlNet to enhance image generation workflows by incorporating reference images, depth maps, and edge detection for precise and creative outputs.
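To make the ControlNet idea concrete, here is a hedged sketch of depth-map conditioning with SDXL in diffusers; the workshop itself drives ControlNet from a UI, and the model IDs and file names below are assumptions.

```python
# Sketch: conditioning SDXL on a depth map with ControlNet via diffusers.
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth_map = load_image("massing_depth.png")  # depth render exported from the 3D model
image = pipe(
    prompt="brick museum facade, overcast daylight, photorealistic",
    image=depth_map,
    controlnet_conditioning_scale=0.7,  # control weight: how strictly to follow the depth map
    num_inference_steps=30,
).images[0]
image.save("museum_controlled.png")
```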
Advanced AI Image Refinement (45:00)
Iterative workflows for architectural visuals
This session focuses on refining architectural images through iterative workflows, including patching designs in Photoshop, inpainting specific areas, and upscaling to achieve polished, high-resolution results.
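As a rough illustration of the upscaling step, the sketch below runs a diffusion-based x4 upscaler through diffusers; the workshop performs this in a UI, and the checkpoint and file names are assumptions.

```python
# Illustrative upscaling sketch with the Stable Diffusion x4 upscaler.
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = load_image("draft_render_512.png")  # e.g. a draft patched in Photoshop
upscaled = pipe(
    prompt="detailed architectural photograph, sharp materials",
    image=low_res,
    num_inference_steps=25,
).images[0]
upscaled.save("draft_render_2048.png")
```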
Advanced Inpainting Techniques Overview (37:06)
Refining images using Stable Diffusion
This session focuses on refining architectural images using AI tools, specifically Stable Diffusion via Automatic1111. Techniques include inpainting, applying control maps such as depth and Canny edges, and optimizing details to enhance visual fidelity.
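For orientation, the following is a minimal scripted equivalent of masked inpainting, assuming the SDXL inpainting checkpoint in diffusers; the session itself works in Automatic1111, and the prompts, mask, and strength value are illustrative.

```python
# Minimal inpainting sketch: regenerate only the masked region of an image.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16
).to("cuda")

image = load_image("facade_render.png")   # source image
mask = load_image("facade_mask.png")      # white = region to regenerate
result = pipe(
    prompt="floor-to-ceiling glazing with slender mullions",
    image=image,
    mask_image=mask,
    strength=0.85,            # how much the masked area is allowed to change
    num_inference_steps=30,
).images[0]
result.save("facade_render_inpainted.png")
```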
Inpainting & Video Creation (34:39)
Refine images, enhance details, animate
This session focuses on refining architectural images using inpainting techniques, enhancing details, and creating videos with AI tools.
LoRAs for Fine-Tuning (26:56)
Fine-tuning for specialized creativity
This presentation covers LoRAs for fine-tuning diffusion models, enabling the addition of specific styles or concepts and enhancing architectural creativity.
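A minimal sketch of how a custom LoRA is applied on top of a base model, using diffusers; the LoRA file, trigger word, and blend strength below are hypothetical placeholders, not assets from the workshop.

```python
# Sketch: applying a custom-trained LoRA on top of SDXL.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights trained on a specific architectural style (hypothetical file).
pipe.load_lora_weights("my_office_style_lora.safetensors")
pipe.fuse_lora(lora_scale=0.8)  # blend strength of the LoRA style

image = pipe(
    prompt="officestyle courtyard housing block, perspective view",  # 'officestyle' = assumed trigger word
    num_inference_steps=30,
).images[0]
image.save("courtyard_lora.png")
```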
Flux Revolution in Diffusion (09:59)
Flux's cutting-edge advancements in diffusion models
This presentation explores the advancements brought by Flux, a new diffusion model offering superior natural language comprehension, enhanced fine-tuning capabilities, and unparalleled detail generation.
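As a point of reference, Flux can also be run outside ComfyUI through diffusers; the sketch below assumes the FLUX.1-dev checkpoint (which requires accepting its license on Hugging Face) and illustrative settings.

```python
# Hedged sketch of running Flux via diffusers; the workshop drives Flux from ComfyUI.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # Flux is large; offload layers to fit consumer GPUs

image = pipe(
    prompt="a cantilevered concrete library above a reflecting pool at dusk, long exposure",
    guidance_scale=3.5,
    num_inference_steps=28,
    generator=torch.Generator("cpu").manual_seed(7),
).images[0]
image.save("library_flux.png")
```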
ComfyUI x Flux workflows (48:04)
Advanced workflows, mixing, creative exploration
This section demonstrates ComfyUI's advanced workflows with Flux models, showcasing text-to-image, image-to-image, upscaling, and multi-image mixing to explore creative possibilities.
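The image-to-image branch of such a workflow has a close scripted analogue in diffusers; the sketch below is an illustration under stated assumptions (SDXL base model, a hypothetical Enscape export, an arbitrary strength), not the ComfyUI graph shown in the session.

```python
# Image-to-image sketch: restyle an existing render while keeping its composition.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

init = load_image("enscape_export.png")  # rough render used as the starting point
image = pipe(
    prompt="refined competition render, warm interior lighting, people, vegetation",
    image=init,
    strength=0.55,        # lower = stay closer to the input image
    guidance_scale=6.0,
).images[0]
image.save("enscape_refined.png")
```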
Advanced Workflow Insights (41:47)
Detailed Q&A on AI tools
This session delves into advanced Flux workflows, fine-tuning methods, and practical applications of AI tools, with an extensive Q&A on ComfyUI techniques.
Important Notes:
Software installation is NOT part of the workshop. Participants must have all the software installed before the workshop starts.
Ismail Seleit is a distinguished architect at Foster + Partners, renowned for his expertise in computational design and BIM (Building Information Modeling). His work encompasses various projects, from design competitions to complex building realizations. Ismail is dedicated to integrating advanced AI tools, such as Stable Diffusion and ControlNet, into architectural workflows, pushing the boundaries of design innovation. Beyond architecture, he is an accomplished ambient-electronic music producer, creating soundtracks for architectural films and various independent projects, showcasing his multifaceted creativity and technical skills.
Asifa 2025-01-22 16:11:32
Very informative and insightful sessions. Thank you for the sessions. Looking forward to more interactions in the future.
Asifa 2025-02-02 15:35:59
Good Session