Feature riffing: Using AI to refine product ideas before design

Doug
CPO

In my experience, working with mockups and prototypes early in the ideation process is a key behavior of successful product managers. It's easy to jot down "add new feature here," but that rarely captures the real impact of a change. Once you start examining the actual UI, its behaviors, and transitions between states, you uncover nuances, edge cases, and missing logic that words alone don't reveal.
Over the years I’ve used Balsamiq for this. Its back-of-a-napkin look makes clear that a mockup isn't the specification, which invites collaboration. But recently, AI has become my new go-to for early thinking. It can generate sample data, apply large-scale changes, and offer enough UX guidance to keep PMs from drifting too far off track.
Before we go further, a caveat: this process is meant to refine ideas, not replace UX. I've led product and UX teams for 15 years, and the point of this "feature riffing" is to walk into conversations better prepared, not more prescriptive. Great UX partners need the freedom to explore. This is simply the work I do to organize my thinking before bringing them in.
With that clarified, let's dive in!
Setting the stage: Improving story splitting in Atono
One area of Atono I want to improve is splitting stories. We believe strongly in smaller, well-scoped stories and aim to decompose everything down to size medium or smaller. Although we have a ‘Duplicate story’ feature, it doesn't quite support true splitting.
So I drafted an initial story—nothing fancy yet, but enough to get started:

With the basics down, I wanted to see how the flow would actually feel—what the user would click, how the dialog would behave, and where friction points might appear.
Next, I grabbed a screenshot of our current story editing UI, started a new chat in Claude Desktop, uploaded the screenshot, and prompted:
I would like to create a few mockups for STORY-648. Attached is the current UI with the action menu open. Can you mock up the story page with the revised menu and the split-story UI described in AC#2?
Claude recognized I was referring to a story and automatically pulled the details from Atono via our MCP server—a tool that lets Claude (or other AI tools) directly access story content, acceptance criteria, and context without me needing to copy and paste.
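If you're curious what that plumbing looks like, here's a minimal sketch of an MCP tool that exposes story details, written against the official MCP TypeScript SDK. The tool name, API endpoint, and response shape are all assumptions for illustration, not our actual server.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical MCP server exposing a single "get_story" tool. The endpoint
// and response shape are illustrative assumptions, not Atono's real API.
const server = new McpServer({ name: "atono-stories", version: "0.1.0" });

server.tool(
  "get_story",
  "Fetch a story's title, description, and acceptance criteria by key",
  { key: z.string().describe("Story key, e.g. STORY-648") },
  async ({ key }) => {
    const res = await fetch(`https://api.example.com/stories/${key}`);
    const story = await res.json();
    // MCP tools return content blocks; plain text is enough for the model.
    return { content: [{ type: "text", text: JSON.stringify(story) }] };
  }
);

// Claude Desktop launches MCP servers as subprocesses and talks to them
// over stdio.
await server.connect(new StdioServerTransport());
```

Once a server like this is registered in Claude Desktop, mentioning a story key in a prompt is enough for Claude to call the tool and pull in the full context on its own.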

First pass: Letting the AI generate a mockup
Claude's first mockup added icons to each menu option—not what we use today, but easy enough to correct. I asked it to remove the icons, and it complied.

From there I opened the new "Split story" option, and Claude built out a dialog for defining the split. After I'd tested it for a few minutes, a hole in my original story became obvious.
Discovery phase: When interaction exposes missing logic
I had specified that acceptance criteria (ACs) should be selectable, but I hadn't considered that ACs are hierarchical, and hierarchy brings its own complications.
Claude revealed a scenario like this:

Selecting a child AC without the parent raises questions:
- Should the parent come along for context?
- If only some children are selected, should the parent be copied or moved?
- Should child ACs automatically inherit the parent's selection state?
This is where AI becomes most valuable: the interaction exposes the blind spots you didn’t know to look for.
To tighten the logic, I gave Claude a prompt:
Acceptance criteria are hierarchical. When a child is selected, all parents should be selected. If a parent is selected, all children should initially be selected but can be unselected. If a parent is unselected, all children should be unselected.
Claude updated the mockup to reflect this logic.
This forced me to refine the acceptance criteria in my story—and to specify that if a parent is selected without all children selected, it should be copied, not moved, to the new story to avoid losing context.
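To make those rules concrete, here's a minimal sketch of the selection logic in TypeScript. The `AC` shape and function names are mine, for illustration; this isn't how Atono actually implements it.

```typescript
// Illustrative model: a flat list of ACs with optional parent links.
interface AC {
  id: string;
  parentId?: string; // undefined for top-level ACs
}

type Selection = Set<string>;

// Selecting an AC: a child pulls in its parent (rule 1); a parent initially
// pulls in all of its children, which can be unselected afterward (rule 2).
function select(acs: AC[], selected: Selection, id: string): Selection {
  const next = new Set(selected);
  next.add(id);
  const parentId = acs.find((a) => a.id === id)?.parentId;
  if (parentId) next.add(parentId);
  for (const child of acs) {
    if (child.parentId === id) next.add(child.id);
  }
  return next;
}

// Unselecting a parent unselects all of its children (rule 3). Unselecting
// a child deliberately leaves the parent selected.
function unselect(acs: AC[], selected: Selection, id: string): Selection {
  const next = new Set(selected);
  next.delete(id);
  for (const child of acs) {
    if (child.parentId === id) next.delete(child.id);
  }
  return next;
}

// The refinement from the story: a parent whose children are only partially
// selected is copied to the new story rather than moved, so no context is lost.
function splitAction(acs: AC[], selected: Selection, id: string): "move" | "copy" {
  const children = acs.filter((a) => a.parentId === id);
  const partiallySelected = children.some((c) => !selected.has(c.id));
  return children.length > 0 && partiallySelected ? "copy" : "move";
}
```

Note the asymmetry: selection cascades both up and down, but unselection only cascades down. That's exactly the kind of rule that's hard to see in prose and obvious the moment you click through a mockup.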
Using AI to generate better sample data
The sample ACs in the mockup weren't deep enough to really test the interactions with the user interface, so I asked Claude:
Can you add more second-level acceptance criteria so the hierarchical behavior is clearer?
This is one of AI's quiet superpowers: generating sample data. I find you get far better insights into UI behavior when the data feels real rather than Lorem Ipsum.
Exploring alternative UI patterns
At this point the main flow was working, but I was curious whether there were better patterns. So I asked:
Do you have any other UI patterns you would recommend to solve the problem?
Claude suggested eight alternatives, including a split view for seeing the original and new story at the same time, and a drag-and-drop interface. I tried a few ("Let's see the Side-by-Side Split"), but ultimately felt the original dialog best demonstrated what matters most to the persona (which, conveniently, in this case was me).
Closing the loop: Finalizing the story
Before finishing, I asked:
Are there any acceptance criteria I should add to my story?
Claude caught a good one: what happens if someone tries to split without selecting any ACs to move or copy? (Side note: should I tell Claude "You are absolutely right"?)

Our current MCP implementation can create stories but not update them (I already have my first enhancement request from… myself!), so I pasted the AC in manually, had Claude update the mockup to match, and attached the HTML prototype to the story in Atono.
At this point, the story felt groomed and ready, so I pulled in my UX partner and the engineering team lead to walk them through what I had and get their feedback. My UX partner can then figure out how to make the real deal in Figma so we can submit this story to the engineering team for sizing.
Here are some screenshots of the interactive mockup in action starting with the menu:

And the dialog for performing the split:

A quick note: Claude vs. ChatGPT for mockups
I don't own any Anthropic stock, so this is unbiased: as of this writing, I've found the mockup interaction works much better with Claude (Sonnet 4.5) than with ChatGPT (GPT-5). Claude does a better job of analyzing the screenshots I hand it and representing our user interface in the mockups it builds. It generated cleaner, higher-contrast mockups, accurately modeled hierarchical ACs, and retained the full story context from the screenshots.
The same operations in ChatGPT required a lot more tuning. The first mockup was nearly invisible because of the lack of contrast, the sample acceptance criteria weren't hierarchical, and it only showed part of the story. I tuned it up a bit, but it still just didn't get to the same place as Claude.

With that said, even imperfect mockups are useful if you call out the gaps during collaboration. Sometimes articulating what doesn't work is as helpful as articulating what does!
Conclusion: Why feature riffing works
Feature riffing with AI doesn't replace product discovery, UX, or engineering collaboration—it helps me show up better prepared.
A few things I’ve learned from doing this regularly:
- Start with the real UI, not a blank canvas. Screenshotting our actual story editing interface gave Claude the context it needed to generate something immediately useful.
- Use realistic data. Generic placeholders hide complexity. Real acceptance criteria with actual nesting showed me how selection logic would actually work.
- Actually click through everything. That's how I discovered the hierarchical AC problem—by trying to select a child without its parent.
- Ask what else could go wrong. Claude caught the "what if nothing is selected?" edge case I'd completely missed.
- Time-box it to 20-30 minutes. This whole session took about 20 minutes. If you're polishing pixels, you've gone too far.
By the time I bring a story to UX and engineering, the team isn't starting from a blank slate. They're reacting, improving, and evolving the idea—not trying to decode it.
In this case, a 20-minute jam session turned a vague "make ACs selectable" into a well-defined interaction model with clear edge case handling. That's the kind of thinking that used to happen in the third refinement meeting—now it happens before the first one.
This is where AI shines: not in producing the final design, but in sharpening your understanding of what the design needs to solve.