The role of canvas in the AI-assisted design process

The longer I work with agentic AI code generation, the more I realize that the canvas/whiteboard-based approach is still needed. For designers, though, it's no longer the only medium they touch. The role of the canvas has changed with the new processes and tools: it's now a place to return to when seeing something is much more natural than trying to describe it.

Below are a couple of examples where I found that visual thinking on a canvas helps me deliver a better final product.

The early stages

When you prompt an AI tool, you need an idea in your head and the vocabulary to describe it. The articulation barrier is real – we know that from recent studies – you might not have the idea in your head, or you might struggle to express it in text. Or both.

A canvas lets you think with your hands. You throw a rectangle here, a grainy texture there, and a typeface in the corner. You "move things around until they look right", as Milton Glaser used to say.

My Figma and Paper files have started to look more and more like scrapbooks these days. It's messy. It's fast. It lets me stumble into an idea, and that's the whole point.

User journey thinking

Expressing the entire flow on a canvas lets you zoom out to see the big picture all at once, then zoom back in for the details. Being able to visually map what's happening, zooming in and out as needed, is still a very useful experience. It helps you spot issues at the edges where things connect, which are otherwise pretty hard to catch in a code-based tool.

Extra points if you can generate the actual flows of an actual application and then tweak them. Some modern canvas tools, like Paper, already get pretty close to that.

Premature convergence

Converging on a solution with AI-assisted coding can be remarkably fast. All this newfound speed lets you generate a final-looking prototype extremely quickly, and that can be dangerous.

Diverging across the problem and solution space is extremely important in design, and because many AI-generated prototypes look nearly final, you might not explore alternatives as much. That makes it very easy to slip into over-optimization: polishing the output too early, before figuring out whether it's the right output in the first place.

Yes, code got cheaper, and these tools make it much easier to just scrap everything and start from scratch, but there is still some waste in doing that, and the lesson might come too late.

The precision problem

Describing precise, subtle visual treatments in a prompt can get maddening, especially given the nondeterministic nature of the models. For high-polish detail, dragging a node two pixels to the left or tweaking a drop shadow (Beautiful Shadows is still my favorite Figma plugin) is still faster than iterating through prompts to figure it out. Yes, you can technically paste CSS straight into the prompt, but that's still slower than just moving things on a canvas.

A lot of visual craft happens visually, and directly manipulating items until they look right is still much easier than describing them.

Wrapping it up

I'm using code generation daily (in fact, the entire codebase of my portfolio is AI-generated) and I actually hope AI eats the boring parts. Flows that generate automatically so you can spend your energy reviewing and improving them, not building them from scratch? Count me in.

I do believe the whiteboard isn't going anywhere, though. We designers, as visual creatures, probably won't let it go that easily.
