Alibaba’s newest video model, Wan 2.6, is now open to try. Ima Studio is a co-launch partner, bringing early access and creator subsidies so you can start generating for free.

Wan 2.6 is Alibaba’s newest AI video model, upgraded from Wan 2.5 and built to compete with Sora 2. Ima Studio is a co-launch partner, bringing Wan 2.6’s multi-shot storytelling, 15-second output, and stronger reference consistency directly into creation workflows.
To make it easier to use, Ima Studio provides 100+ optimized prompt templates for short dramas, talking videos, product demos, and more — turning a powerful model into something creators can use instantly.

1. Open Ima Studio’s AI Video Generator and select Wan 2.6 to enable multi-shot storytelling, audio-driven acting, and high-stability 1080p output.
2. Start with a voice clip, a reference video, a single photo, or pure text; Wan 2.6 adapts to whichever workflow you use.
3. Click Generate, and Wan 2.6 handles everything automatically: motion planning, lip sync, pacing, and scene consistency.
Ima Studio is an official co-launch partner for Wan 2.6, so creators can try the model at launch and follow capability updates as they roll out.
Instead of starting from a blank box, Ima Studio offers clickable presets and ready-to-run prompt structures for short dramas, talking videos, product demos, dialogue, and more—so users get a strong first result faster.
Wan 2.6 is powerful, but different projects need different strengths. Ima Studio gives you access to multiple leading models in one place, so you can choose the best fit per task without switching tools.
With Ima Arena, you can compare outputs across models using the same prompt and inputs—making it easier to evaluate style, consistency, and pacing before committing to a workflow.
Ima Agents help turn a rough idea into usable prompts and repeatable generation steps—especially helpful for teams producing series content, campaigns, or daily publishing pipelines.
Generate content for vertical, horizontal, or square formats with a consistent workflow. The goal is less “rescue work” and more outputs that are ready to publish.

What is Wan 2.6?
Wan 2.6 is a next-generation AI video model built for scene-level video creation. It generates longer, more structured outputs with multi-shot pacing, clearer visuals, and native audio-visual alignment, making videos more usable straight from generation.

How is Wan 2.6 different from other video models?
Most models stop at clip-level output. Wan 2.6 supports storytelling: longer segments, smoother flow, cleaner visuals, reference-driven control, and audio-linked talking performance. This means less stitching, less fixing, and more video that looks ready to use.

What can you create with Wan 2.6?
You can generate short dramas, talking videos, micro-stories, social content, virtual influencer clips, product demos, brand ads, and e-commerce visuals. The model is especially strong at content that needs pacing, performance, or narrative flow.

Can Wan 2.6 handle talking videos with synced audio?
Yes. Wan 2.6 features native audio-visual sync and improved lip alignment, making talking videos, commentary content, and virtual influencers feel more real and less “AI-stitched.”

Do you need to edit the videos after generation?
No. Many outputs arrive with pacing and shot flow built in, meaning less cutting, syncing, or repairing. You can refine results if you want, but usable footage is generated from the start.