ElevenLabs Is Expanding Beyond Voice. So I Tested It.
Most people know ElevenLabs for its ultra-realistic AI voice generation.
What fewer people are paying attention to is what’s happening around it.
ElevenLabs is now partnering with platforms such as Veo, Kling, and Runway. The ecosystem is shifting from isolated tools to interconnected creative systems.
One platform. Multiple modalities. Blurred lines.
That’s where things get interesting.
As someone who’s been testing generative AI tools since the early days, I don’t take feature announcements at face value. I test them.
So I ran a slightly unconventional experiment:
A fashion editorial scene.
Florida beach.
Banana as a phone.
The joke wasn’t the goal. The evaluation was.
I was watching for:
• Lip-sync precision
• Dialogue timing
• Facial stability under strong lighting contrast
• Object interaction integrity
The results were… telling.
Rather than summarize it here, I shared the full breakdown and video demonstration on LinkedIn. You can find the complete experiment, tool stack, and analysis here:
More controlled AI experiments are coming as these platforms continue to converge.