What is ControlNet?

Definition
ControlNet is a neural network architecture that adds spatial conditioning to AI image generation. Instead of relying on text alone, ControlNet lets you supply a pose skeleton, depth map, or edge map to control composition and structure — so a generated image matches the exact pose or layout you want.

Pure text-to-image models struggle with precise spatial control. If you prompt 'a person dancing' you get a random pose. ControlNet solves this by adding a second input stream — an image-derived condition like a stick-figure pose skeleton — that guides the generation's spatial structure while the text prompt controls style and content.

Introduced in early 2023, ControlNet made Stable Diffusion practical for production use cases that need reproducible layouts: product photography, architectural visualization, character sheets, and pose-matched series of images.

How it works

Control image extraction

A pre-processor converts an input (usually a reference photo) into a structural representation — a pose skeleton from OpenPose, a depth map from MiDaS, or Canny edges from OpenCV.
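To make the pre-processing step concrete, here is a minimal gradient-threshold edge extractor in NumPy. It is a simplified stand-in for OpenCV's Canny (which adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholds); the function and variable names are illustrative, not part of any ControlNet library.

```python
import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Crude edge extractor: Sobel gradients + a single threshold.
    A real ControlNet pre-processor would call cv2.Canny or similar."""
    # Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()
    # Binary edge map: 1 where the gradient is strong, 0 elsewhere
    return (magnitude > threshold).astype(np.uint8)

# A synthetic "photo": a bright square on a dark background
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = edge_map(img)  # 1s trace the square's outline, 0s elsewhere
```

The binary map that comes out is exactly the kind of structural image a Canny ControlNet consumes: the generator is then free to fill in color and texture, but is steered to keep its strong edges where the map says they are.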

Trainable branch

ControlNet makes a trainable copy of Stable Diffusion's U-Net encoder blocks while freezing the original weights. The control image flows through this copy, and its outputs are injected into the frozen model's denoising steps through zero-initialized convolutions — so training starts from the unmodified base model and gradually learns how much structural guidance to add.
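The "zero convolution" trick is the key detail: because the connecting layers start at all zeros, the injected signal is zero before training, and the combined model behaves exactly like the original. A minimal NumPy sketch of that injection (the block functions, shapes, and the scalar weight standing in for a 1×1 convolution are all illustrative assumptions, not the real layer dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_block(x):
    """Stand-in for a frozen Stable Diffusion encoder block."""
    return np.tanh(x)

def trainable_copy(control):
    """Trainable branch: starts as an exact clone of the frozen block."""
    return np.tanh(control)

def controlnet_forward(x, control, zero_conv_weight):
    """Inject the control branch into the frozen path.
    zero_conv_weight plays the role of a 1x1 'zero convolution':
    it is initialized to 0.0 and learned during training."""
    base = frozen_block(x)
    injected = zero_conv_weight * trainable_copy(control)
    return base + injected

x = rng.normal(size=(4, 4))        # noisy latent at some denoising step
control = rng.normal(size=(4, 4))  # features from the control image

# With the weight still at its zero init, the output is identical
# to the base model's — ControlNet cannot degrade it at step 0.
assert np.allclose(controlnet_forward(x, control, 0.0), frozen_block(x))
```

Once training nudges the zero-conv weights away from zero, the control features begin to steer the denoising trajectory while the frozen base model keeps its original generative ability.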

Multi-ControlNet composition

Multiple ControlNets can be applied simultaneously — e.g., combining a depth map (for scene geometry) with a pose skeleton (for the subject), giving you control over both.
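When ControlNets are stacked, a common implementation strategy is to sum each net's residual into the frozen model, scaled by its own conditioning weight. The sketch below shows that composition rule in NumPy; the function name, shapes, and the specific scale values are assumptions for illustration, not real library internals.

```python
import numpy as np

def combine_controls(base_residual, control_residuals, scales):
    """Sum per-ControlNet residuals, each scaled by its own
    conditioning weight, onto the base model's residual."""
    out = base_residual.copy()
    for residual, scale in zip(control_residuals, scales):
        out += scale * residual
    return out

base = np.zeros((2, 2))                # frozen model's residual (toy shape)
depth_residual = np.ones((2, 2))       # stand-in for a Depth ControlNet output
pose_residual = 2 * np.ones((2, 2))    # stand-in for an OpenPose ControlNet output

# Depth at full strength, pose at half strength:
combined = combine_controls(base, [depth_residual, pose_residual], [1.0, 0.5])
# combined is 1.0*1 + 0.5*2 = 2.0 everywhere
```

The per-control scales are what UIs expose as "control weight" sliders: lowering one ControlNet's scale loosens its grip on the output without removing the other's.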

Types & variants

  • OpenPose
    Matches human body poses — ideal for character consistency and action shots.
  • Canny
    Matches edges — best for preserving product or building silhouettes.
  • Depth
    Matches scene depth — preserves 3D layout while changing style or subject.
  • Lineart
    Matches clean line drawings — converts sketches to finished renders.
  • Scribble
    Matches rough scribbles — loosest control, most creative latitude.

Common use cases

  • Generating product photography with a consistent pose across multiple AI images
  • Converting rough sketches into polished illustrations without losing composition
  • Character sheets with the same pose from different angles
  • Architectural visualization preserving building proportions while changing materials

Frequently asked questions

Do I need a GPU to use ControlNet?
To run ControlNet locally, yes — typically 8+ GB of VRAM. On Oakgen, ControlNet-enabled generations run on our GPUs and cost slightly more credits than plain text-to-image.
Can I stack multiple ControlNets?
Yes. A common combo is OpenPose + Depth to lock both the subject's pose and the scene's 3D layout, then let text drive style and appearance.
