When AI Gets Steam Wrong (and Why That Matters)

Lately, I’ve been running a series of small, controlled experiments with text-to-video AI tools. Not cinematic showcases. Not marketing demos. Just simple prompts designed to test one thing at a time.

This most recent one was deceptively simple:
A cup of coffee with visible steam.

Same prompt. Different tools. Four outputs.

What followed on LinkedIn was more interesting than the videos themselves.

See the actual videos on LinkedIn.

The Prompt Was the Easy Part

Steam seems obvious. Humans understand it intuitively.

It’s translucent.
It has volume but no edges.
It reacts to light.
It reveals the background rather than obscuring it.

But AI models don’t experience steam. They infer it.

That’s where things start to get revealing.

What the Experiment Showed

Across the outputs, a pattern emerged quickly:

  • Some models rendered steam like smoke.

  • Others treated it like fog or vapor with mass.

  • A few added movement but lost continuity.

  • One came close, but introduced a subtle loop jump.

What fascinated me wasn’t which one “won.”
It was why viewers responded differently to each result.

Several comments pointed out the same thing independently:

“The steam that worked best was translucent enough to see into the background.”

That single observation tells you everything.

Why Translucency Is a Litmus Test

Steam isn’t about motion alone.
It’s about perceptual honesty.

When steam blocks the background completely, your brain flags it as smoke.
When it behaves too uniformly, it reads as an overlay.
When it has volume and restraint, it feels real.

This is where many generative video models still struggle. They’re excellent at motion, texture, and drama—but subtle physical phenomena expose the gaps in understanding.

The “Tiny Jump” Problem

One output was repeatedly chosen as the strongest, even though viewers noticed a small jump in the loop.

Why was it forgiven?

Because realism is hierarchical.

We’ll tolerate minor continuity issues if the underlying physics feel right. But we won’t accept smooth motion if the material itself is wrong.

That’s an important takeaway for anyone using AI tools professionally.

This Was Never About Picking a Winner

I didn’t publish this experiment to crown a platform.

I published it to surface questions like:

  • How does this model interpret vapor?

  • Does it understand translucency?

  • Can it balance motion with restraint?

  • Where does it break under simplicity?

These are the questions that actually matter when you’re deciding which tool to use for a specific creative task.

More Experiments Coming

This won’t be the last one.

Sometimes I’ll push realism.
Sometimes abstraction.
Sometimes failure on purpose.

Because progress doesn’t happen where everything looks perfect—it happens where the illusion cracks just enough for us to see how it’s built.

Holly Picano

Featured in Forbes and Who's Who. Generative AI author.

https://www.hollypicanodigital.com