AI & Parametric Negotiation 2.0

This workshop explores how designers can merge parametric design with generative AI to transform animations into site-integrated architectural visuals.

50 Seats
Jan 3, 4, 2026
13:00 - 17:00 GMT
Saturday - Sunday
Lessons in Progress
Beginner
8 Hours
Certificate
English
Unlimited Access
€100.00
€85.00
Last seat at this price!
Generative AI is typically treated as a command-and-follow relationship. We rarely find a middle ground where the designer can draw on the statistical strengths of AI while the AI draws on the designer’s experience.

This workshop continues our exploration of that balance between generative AI and parametric design, building on the pedagogy established in Part 1, where we converted parametric patterns into architectural space.
In this workshop, we will push this methodology further with video: abstract pattern animations produced in Grasshopper will be converted into architectural spaces.

While doing this, we will also consider the site where our design proposal will be implemented, creating a flow from abstraction to design to on-site realization. The aim is neither to control AI fully nor to accept what it generates passively, but to cultivate a shared human-AI latent space in which both agents contribute to the design process.
  • Explore how designers can create hybrid workflows with AI.
  • Think critically about how much AI influence we want in our designs by controlling the inputs.
  • Create a negotiable space between parametric design processes and AI tools.
  • Learn the language and workflows that AI understands, and identify where its limitations arise.
  • Understand how small changes in the workflow can drastically alter the output.
  • Develop workflows connecting parametric modeling to site integration with AI.
  • Build an iterative design mindset, redefining the role of architects and designers in the AI landscape.
This workshop is designed for participants who are new to ComfyUI but have a basic understanding of Rhino3D. We will begin by exploring different generative AI models to identify which ones best suit our workflow. Newer models often produce higher-quality results but take longer to generate, while older models are faster but may compromise on quality.

Participants will then create short animation sequences in Grasshopper and convert them into architectural spaces using AI models such as Flux, SDXL, and Gemini 2.5 inside ComfyUI. Throughout the process, we will use concepts like upscaling, ControlNet, IPAdapter, and image refinement. Finally, selected frames will be morphed onto real sites chosen by the participants.

By the end of the workshop, participants will have developed an iterative workflow that connects parametric logic, generative AI interpretation, and contextual design visualization.

The workshop aims to move beyond standalone generative AI tools and integrate them directly into design processes such as parametric modeling. Participants will use Grasshopper3D to create parametric animations, ComfyUI to convert those animations into architectural spaces placed on real sites, and Premiere Pro to combine them into cohesive video sequences.
Stage 1 | Parametric Animations 

Participants will learn key Grasshopper3D concepts like data trees, color coding, and wire connections, and explore how to animate frames using sliders and different plugins. In this stage, we will be working with abstract patterns, but participants can choose their own designs as well. The focus is on building an understanding of animation within the Grasshopper interface.
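
To give a rough sense of what frame extraction involves (this sketch is not part of the workshop material), the script below captures the Rhino viewport once per animation step using Rhino Python. The frame count, resolution, and output folder are placeholder assumptions.

```python
# A minimal Rhino Python sketch of scripted frame capture, as an alternative
# to Grasshopper's built-in slider "Animate..." dialog. Frame count,
# resolution, and output folder are assumptions for illustration.
import os
import clr
clr.AddReference("System.Drawing")
import System.Drawing
import Rhino

frame_count = 120                    # assumed number of animation frames
out_dir = r"C:\workshop\frames"      # hypothetical output folder
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)

view = Rhino.RhinoDoc.ActiveDoc.Views.ActiveView
for i in range(frame_count):
    # ...advance the Grasshopper slider / geometry to step i here
    # (e.g. via the slider's Animate dialog or the Grasshopper SDK)...
    bitmap = view.CaptureToBitmap(System.Drawing.Size(1920, 1080))
    bitmap.Save(os.path.join(out_dir, "frame_{:04d}.png".format(i)))
```

In practice, the slider's right-click Animate dialog does the same job without scripting; the sketch simply makes the numbered frame-naming convention explicit for the later ComfyUI stage.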

Stage 2 | Animation to ComfyUI Render

This stage introduces core AI concepts such as checkpoint models, VAEs, and prompts before translating Grasshopper frames into architectural renders using ComfyUI. Participants will use ControlNet models (Depth and Sketch) and IPAdapter to guide and style their outputs.
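
For participants who want a feel for what the ComfyUI graph does under the hood, here is a rough script-level analogue using the Hugging Face diffusers library instead of ComfyUI nodes. The model IDs, prompt, and conditioning scale are illustrative assumptions, not the workshop's exact setup.

```python
# A sketch of the Stage 2 idea with diffusers rather than ComfyUI:
# a Depth ControlNet preserves the spatial structure of a Grasshopper frame
# while SDXL re-interprets it as an architectural render.
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth_map = load_image("frame_0001_depth.png")  # hypothetical depth render of one frame
image = pipe(
    prompt="a timber pavilion on a coastal site, soft morning light",  # assumed prompt
    image=depth_map,
    controlnet_conditioning_scale=0.8,  # how strongly the frame's geometry is retained
    num_inference_steps=30,
).images[0]
image.save("frame_0001_render.png")
```

Style transfer from a reference image (the IPAdapter step) follows the same pattern: an image encoder conditions the diffusion model on a reference, and a weight controls how strongly that style competes with the text prompt.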

Stage 3 | Render to Site

Participants will select preferred designs and integrate them onto real sites using upscaling with SDXL and FLUX, followed by compositing through Gemini 2.5 Flash Preview. Image editing will refine lighting, weather, and time of day for contextual realism.
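
As a rough illustration of the refinement idea (again outside the actual ComfyUI workflow), the sketch below upscales a rendered frame and runs a low-denoise image-to-image pass with SDXL so detail is added without losing the composition. Paths, prompt, scale factor, and strength are assumptions.

```python
# A minimal sketch of upscale-then-refine: enlarge a selected frame, then
# let SDXL add detail at low denoising strength so the design stays intact.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

frame = load_image("frame_0001_render.png")                 # hypothetical selected frame
frame = frame.resize((frame.width * 2, frame.height * 2))   # naive 2x upscale before refinement

refined = pipe(
    prompt="architectural photograph, detailed facade, overcast daylight",  # assumed prompt
    image=frame,
    strength=0.3,            # low denoise: sharpen detail, keep the composition
    num_inference_steps=30,
).images[0]
refined.save("frame_0001_upscaled.png")
```

The key parameter is the denoising strength: close to 0 it only sharpens what is already there, close to 1 it re-imagines the frame, which is precisely the degree of AI influence this stage asks participants to negotiate.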

Stage 4 | Combining in Adobe Premiere Pro

Participants will bring their sequences together in Premiere Pro, learning basic editing, keyframing, adding text, and making video adjustments to create smooth animations.

Program:

Day 1 – Parametric Animation to Rough Architectural Space

Tools: Rhino3D, Grasshopper3D (Plugin – Heteroptera), ComfyUI (SDXL Turbo, SDXL Depth + Sketch ControlNet, IPAdapter), Adobe Premiere Pro

Generating Parametric Animation from Grasshopper

  • Introduction to data trees, color coding, sliders, and wire connections

  • Adjusting camera location for better output

  • Extracting animation frames

Animated Frames to Architectural Space

  • Retaining the design language of animation using ControlNet

  • Transferring style from an existing image with IPAdapter

Combining Frames into Video in Adobe Premiere Pro

  • Introduction to the Premiere Pro interface, importing stills, and working with keyframes
  • Adjusting motion properties to get a smooth animation
  • Assignment 1 + Q&A
  • Outcome: Final video output

Day 2 – Site Implementation of the Design

Tools: ComfyUI (Flux.1 Dev, Gemini 2.5 Flash Image (Nano Banana), Flux Kontext Max), Adobe Premiere Pro, Kling or WAN 2.2

Upscaling One Frame from Day 1

  • Cropping the image

  • Comparing SDXL and FLUX upscaling methods

  • Exploring denoising and image refinement concepts

Merging the Upscaled Design with the Site

  • Upscaling the site photograph

  • Morphing the generated design onto the site context

Image Editing

  • Changing views, climate, or interior scene of the design

Combining Everything

  • Video generation with Kling or WAN 2.2
  • Combining in Adobe Premiere Pro
  • Assignment 2 + Q&A
  • Outcome: Final video and 3D model output

Instructors:

Biography
Kedar is an Architect and Computational & AI Designer at Kedar Undale Design Studio, specializing in consulting, education, and creating art through parametric design. He holds a master’s degree from IAAC, Barcelona, and has worked as a Computational Designer with Toronto-based Partisans. Kedar assists architects and designers in realizing complex, form-driven designs by translating conceptual ideas into executable solutions. For over five years, he has taught global workshops on ‘Parametric Thinking’ at institutions including DigitalFUTURES, InterAccess Canada, PowrPlnt New York, and CEPT India. Since the advent of generative AI, Kedar has focused on creating custom workflows and has led training sessions and faculty development programs at organizations including FUTURLY, PAACADEMY, UID Gujarat, and KLS GIT Belagavi. His art, featured in Homegrown Magazine India, uses computational tools to visualize intangibles like sound, wind, and gravity.