Designing with LLMs in the toolbox: 6 months later

🚧 Disclaimer: This blog post has been edited and rewritten using Claude and Gemini.

It's always good to be on the lookout for ways to improve your day-to-day workflows and automate away at least some of the work that isn't very creative or interesting, or to get help when you're feeling stuck and unsure what to do next. This is where an AI model steps in: I think of them as a Swiss Army knife for design work.

Getting unstuck

If you've ever worked on a complex problem, you probably know the feeling: you've been stuck at a local maximum for hours or days, and you know there might be a better way, but you're struggling to find it. At that point, when I've exhausted my own ideas, I'll use a model as another tool in my toolkit. I describe the concept in as much detail as possible, following the prompting best practices I've gathered over months, and ask the AI to generate alternative approaches.

I might have a login screen design and ask, "What are five completely different ways we could reimagine user authentication?" The model generates ideas I'd never have considered – some (let's be honest: most) completely unfeasible and unrealistic, some surprisingly brilliant.
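For repeat use, I keep this kind of divergent-exploration prompt in a small helper. The sketch below is only an illustration of the pattern; the function name and wording are my own, not a fixed recipe:

```python
def divergent_prompt(concept: str, n_ideas: int = 5) -> str:
    """Build a prompt asking an LLM for radically different takes on a design concept."""
    return (
        "Here is a design concept, described in as much detail as I can:\n\n"
        f"{concept}\n\n"
        f"Propose {n_ideas} completely different ways to approach this. "
        "For each idea, give a one-line summary and its main trade-off. "
        "Favor unconventional directions over safe variations."
    )
```

Asking for the trade-off alongside each idea makes it faster to discard the unfeasible ones.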

Prototyping with code

One of my favorite tricks is using an AI model to bridge the gap between design and code. With the current state of models like Claude or Bolt, you can upload a Figma design and prompt the AI to generate a clickable prototype in code, and it will do a pretty good job if the design isn't too complex. Having a prototype as close as possible to the real thing also improves what you can test, going beyond static screens connected with spaghetti.

Documenting work

Writing documentation is fun, but only sometimes. These days I generate the first draft with an LLM for:

  • Initial design brief drafts
  • Summarized user research notes
  • Translating complex documentation into clear, accessible descriptions
💡 Prompting the model to follow Plain Language guidelines drastically reduces the amount of fluff that models tend to generate when you just straight up prompt them with a question.
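In practice, I bake that instruction into a reusable system prompt. The snippet below is a rough sketch of what that looks like in a chat-style API call; the exact wording and the helper name are my own assumptions:

```python
# A system prompt nudging the model toward Plain Language output
# (wording is illustrative, not an official guideline text).
PLAIN_LANGUAGE_SYSTEM = (
    "Follow Plain Language guidelines: use short sentences and common words, "
    "address the reader directly, prefer active voice, and cut filler phrases. "
    "Do not add introductions, caveats, or summaries that were not asked for."
)

def drafting_messages(task: str) -> list[dict]:
    """Wrap a documentation task in a chat-style message list with the plain-language system prompt."""
    return [
        {"role": "system", "content": PLAIN_LANGUAGE_SYSTEM},
        {"role": "user", "content": task},
    ]
```

The same message list works with most chat-completion APIs, so the instruction travels with every drafting task instead of being retyped each time.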

Summarizing insights

Analyzing user interviews and research used to take forever. Now, I can quickly:

  • Summarize long interview transcripts
  • Identify key patterns in user feedback
  • Organize complex research data
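One small habit that helps with the trust problem discussed below: ask the model to quote the transcript lines that support each pattern, so every claim can be spot-checked against the source. A hypothetical prompt builder for that might look like this:

```python
def summary_prompt(transcript: str) -> str:
    """Build a summarization prompt that demands direct quotes, so each claim can be verified."""
    return (
        "Summarize the user interview transcript below.\n"
        "List the key patterns you see, and under each pattern quote the exact "
        "lines from the transcript that support it, so I can verify them.\n\n"
        f"Transcript:\n{transcript}"
    )
```

Grounding each pattern in quotes doesn't remove the model's bias, but it makes your own review much faster.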

The biggest issue is resisting the temptation to trust the summaries and patterns right away. Like everything else, a generated summary brings some bias into the process, so stay alert and mindful of that. You can cross-reference the output of multiple models and still do your own review if you want to be triple sure.

In the end, it's just a tool

It is very easy to get carried away when using models like this – things that took hours or days are reduced to a couple of seconds of waiting for the model to spit out its output. Don't. Contrary to what some AI talking heads would like you to believe, these models are non-deterministic, carry a ton of bias from their training data, and are still generally not trustworthy. I've found that you shouldn't treat AI as an omnipotent intelligence that knows everything – it's much more productive in the long run to treat it like a very junior coworker you delegate some simpler tasks to: you trust it to get things right, but it won't hurt to double-check.

Use them – especially since your manager probably expects you to, and it can actually impact your perceived performance in your role – but do not trust them.
