Stable Diffusion-ControlNet and its integration with Blender for architectural visualization


© AI-generated by Melih Gurcan Kutsal

Midjourney entered our lives on the 12th of July 2022, and Stable Diffusion on the 22nd of August 2022. In barely a year, AI has become part of our daily lives and, in time, of our profession as well. Less than a year ago, we were still debating whether AI can make art and whether we are about to lose our jobs. While those debates continue, we are already at the start of the next era with ControlNet, a game changer in AI image generation.

When we create images of a concept or of our imagination, we have little control over where objects end up or how a character is posed. Even defining the camera angle requires a separate prompt. With ControlNet, a new key component for Stable Diffusion, we can now create more appealing AI-generated images that meet specific design criteria or objectives.

If you’re curious about using Stable Diffusion + ControlNet with architectural visualization tools such as Rhino and Blender, join PAACADEMY’s Taking Control: Midjourney X ControlNet – Studio Carlos Banon!


The ControlNet architecture is a type of neural network used in the Stable Diffusion AI art generator to condition the diffusion process. The diffusion process, in which the model applies a series of transformations to a noise vector to generate a new image, is a critical component of the generator. ControlNet guides this process by providing additional inputs to the generator.
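A toy NumPy sketch can make the idea of a conditioned diffusion process concrete. The real generator uses a trained UNet at every step; here we only illustrate, under that heavy simplification, how repeated small updates let a conditioning input steer a sample that starts as pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, cond, strength=0.1):
    """One toy 'denoising' step: nudge the noisy image x toward the
    conditioning signal cond. Real Stable Diffusion uses a trained
    UNet here; this only illustrates conditioned iteration."""
    return x + strength * (cond - x)

# Start from pure noise and a simple conditioning "image" (a gradient).
x = rng.normal(size=(8, 8))
cond = np.linspace(0.0, 1.0, 64).reshape(8, 8)

for _ in range(50):
    x = denoise_step(x, cond)

# After many steps, the sample is dominated by the conditioning signal.
print(np.abs(x - cond).max())
```

The point of the sketch is only that each step receives the conditioning input alongside the current state, which is exactly where ControlNet hooks into the real pipeline.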

The ControlNet architecture can be used to control various image properties such as color, texture, and style. For example, by conditioning the diffusion process with ControlNet, the generator can be trained to produce images with a specific color palette or texture. By steering the diffusion process in this way, the generator can produce more targeted and visually appealing images. With just a small sketch, a depth map, Canny edges, or even a pose, you can specify your images more easily with ControlNet; and not just images, but videos as well.
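To show what a conditioning input such as a Canny edge map actually is, here is a minimal NumPy sketch that reduces a toy image to an edge map using simple intensity differences. The real Canny preprocessor used with ControlNet is considerably more sophisticated; this only demonstrates the principle of extracting structural outlines for conditioning:

```python
import numpy as np

def edge_map(img, threshold=0.25):
    """Rough edge map from horizontal/vertical intensity differences.
    ControlNet's Canny preprocessor does much more (smoothing,
    hysteresis), but the principle is the same: reduce an image
    to the structural outlines that should constrain generation."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = img[:, 1:] - img[:, :-1]
    gy[1:, :] = img[1:, :] - img[:-1, :]
    mag = np.sqrt(gx**2 + gy**2)
    return (mag > threshold).astype(np.uint8)

# A toy "facade": dark background with one bright rectangular opening.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0

edges = edge_map(img)  # only the rectangle's outline survives
```

Fed to ControlNet, such an outline pins down where the opening sits in the generated image while the prompt decides everything else.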

The ControlNet architecture is made up of a collection of convolutional neural networks that have been trained to predict the next step in the diffusion process given the previous step and a conditioning input. An image feature vector, a class label, or any other type of information relevant to the image generation process can be used as the conditioning input. The Stable Diffusion AI art generator can generate images that are more customized and targeted to specific design criteria or objectives by utilizing the ControlNet architecture.
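One design detail worth noting: the ControlNet paper attaches its trainable copy of the encoder to the frozen base model through "zero convolutions", 1×1 convolutions whose weights and biases start at zero, so the control branch initially contributes nothing and cannot disturb the pretrained model. A minimal NumPy sketch of that idea:

```python
import numpy as np

def zero_conv(features, weights, bias):
    """1x1 'zero convolution': a per-pixel linear map. ControlNet
    initializes these weights and biases to zero so the control
    branch adds nothing to the base model at the start of training."""
    return features @ weights + bias

# Pretend feature map from the control branch: (H, W, C).
features = np.random.default_rng(1).normal(size=(4, 4, 8))

w = np.zeros((8, 8))  # zero-initialized weights
b = np.zeros(8)       # zero-initialized bias

base_output = np.ones((4, 4, 8))  # stand-in for the frozen UNet's output
combined = base_output + zero_conv(features, w, b)

# Before any training, the control branch leaves the base output untouched.
print(np.allclose(combined, base_output))  # True
```

As training updates the zero-convolution weights away from zero, the conditioning signal gradually starts to influence the output, which is what makes the approach stable.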


© Image by SongZi

Shortly after ControlNet came out, a new tool for Blender appeared on GitHub as well, created by the coder coolzilj, also known as SongZi. This tool can be a bit hard to install in Blender, but we recommend you try it, and here is why.

Blender-ControlNet connects Blender with Stable Diffusion and ControlNet. This means that whatever you can do in ControlNet, you can do in Blender, with one major difference: you can control your camera far more freely.

Of course, if you already have an image taken from your desired camera angle, that is not a problem, but you cannot always find what you want or take that shot, say, from above. The connection between Blender and ControlNet solves this by using simple pre-modeled characters and defining their pose.
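Under the hood, an add-on like this is reported to talk to a locally running Stable Diffusion web UI over its HTTP API, sending the rendered pose or depth image from Blender along with the prompt. The sketch below builds such a request payload; the field names are illustrative assumptions modeled on common web UI conventions, not a documented contract, so check the API docs of your own setup:

```python
import base64
import json

def build_txt2img_payload(prompt, pose_png_bytes, steps=20):
    """Sketch of the kind of request a Blender add-on might send to a
    locally running Stable Diffusion web UI with a ControlNet
    extension. Field names here are illustrative assumptions."""
    return {
        "prompt": prompt,
        "steps": steps,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    # The Blender-rendered control image, base64-encoded.
                    "input_image": base64.b64encode(pose_png_bytes).decode("ascii"),
                    "module": "openpose",
                }]
            }
        },
    }

payload = build_txt2img_payload("concept facade, concrete, sunset", b"\x89PNG...")
print(json.dumps(payload, indent=2)[:80])
```

The important idea is not the exact schema but the division of labor: Blender supplies the control image and camera, the web UI supplies the diffusion model.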

However, that is not the point that interests us, the architects. Throughout our professional and student lives, we generate lots of design ideas, yet expressing them through sketches, abstract 3D models, or physical models is not always easy, or clear enough to convey what we mean. Because of that, a passion can start to die just as it takes its first step.

With this tool in Blender, you can model abstract masses, reference small openings or floors, and, last but not least, scatter small boxes around as stand-ins for structure. After your abstract modeling and setting your camera angle, you can let your imagination run wild: write and refine your ideas in the prompt section and search for the image that best represents your idea. Along the way, it may improve your way of thinking as well.
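Setting the camera angle is, mathematically, just choosing a position and a direction to aim at the model. Here is a small sketch of the underlying look-at math in pure NumPy, assuming Blender's +Z world-up convention; inside Blender itself you would set the camera object via bpy instead:

```python
import numpy as np

def look_at(camera_pos, target):
    """Unit forward, right and up vectors for a camera at camera_pos
    aimed at target, with world up = +Z (Blender's convention).
    Purely illustrative math, not the bpy API."""
    forward = np.asarray(target, float) - np.asarray(camera_pos, float)
    forward /= np.linalg.norm(forward)
    world_up = np.array([0.0, 0.0, 1.0])
    right = np.cross(forward, world_up)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    return forward, right, up

# Camera above and in front of a massing model at the origin.
f, r, u = look_at((5.0, -5.0, 3.0), (0.0, 0.0, 0.0))
```

Because the viewport render you send to ControlNet inherits exactly this camera, the generated image keeps the angle you chose, which is the freedom a plain text prompt cannot give you.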

© Image by CG Vortex

This tool is not just for representing a design idea or generating concepts, though. If you have a finished project with all the details ready but are not very good at visualization, this tool can be useful for you as well. You can model your design; it does not need to be detailed, it only requires the major parts to be included, such as wide openings, extrusions, or slopes on the roof. After defining the major aspects, you can get an image that looks closest to your design. You might even treat the AI's interpretation as a second opinion.

© Image by SongZi

ControlNet gives us an opportunity to turn our sketches into concept images that express our ideas. The connection between Blender and ControlNet, built by designer, coder, and AI whisperer SongZi, lets us express those ideas more easily and clearly. Beyond that, we can use it as a tool for architectural visualization, or just have fun with it; the sky is the limit. Well, at least it used to be.
