April 26, 2026 · 5 min read
Kimodo in Blender


Paweł Pierzchlewicz
CEO
Most AI motion tools live somewhere that isn't your animation tool. You generate something in a web app, export an FBX, import it back, retarget the bones, fix what broke in retargeting, and then start the actual work. Every step away from your timeline is a step away from the decisions you're trying to make.
Today we're shipping an answer. Kimodo in Blender — the motion model running inside Blender, through an add-on we maintain called Proscenium. You write a prompt in the Blender N-panel, hit Generate, and editable keyframes land on your armature. No round-trips, no remaps, no separate viewer.
What Kimodo is
Kimodo is a text-conditioned diffusion model for 3D character motion. It takes a humanoid skeleton and a description — "a person walks forward, breaks into a sprint, then slides to a stop" — and writes the motion that fits.
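For the curious, this is the conceptual shape of that process, not Kimodo's actual code: a motion clip starts as pure noise and is denoised step by step while conditioning on the prompt. Everything in the sketch below (the tensor shape, the step count, the stub encoder and denoiser) is invented so the loop runs; only the overall pattern is the point.

```python
# Conceptual sketch of text-conditioned motion diffusion. NOT Kimodo's code:
# the encoder and denoiser are stubs, and the shapes are illustrative only.
import numpy as np

FRAMES, JOINTS, CHANNELS = 90, 24, 6    # assumed clip shape: 3 s at 30 fps
STEPS = 50                              # illustrative number of denoising steps

def encode_text(prompt: str) -> np.ndarray:
    """Stub text encoder; a real model uses a learned language encoder."""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.standard_normal(64)

def denoise(x: np.ndarray, t: int, cond: np.ndarray) -> np.ndarray:
    """Stub denoiser; a real model predicts and removes noise, guided by cond."""
    pull = 1.0 / (t + 1)
    target = np.tanh(cond[:CHANNELS])   # fake "content" derived from the prompt
    return (1.0 - pull) * x + pull * target

def sample_motion(prompt: str) -> np.ndarray:
    cond = encode_text(prompt)
    x = np.random.standard_normal((FRAMES, JOINTS, CHANNELS))  # start from noise
    for t in reversed(range(STEPS)):    # iteratively denoise toward a clean clip
        x = denoise(x, t, cond)
    return x                            # per-frame joint channels, ready to bake

clip = sample_motion("a person walks forward, breaks into a sprint, then slides to a stop")
print(clip.shape)   # (90, 24, 6)
```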
We didn't train Kimodo from scratch. The upstream model is research from NVIDIA's Toronto AI Lab. What we built is the path from that model to a Blender timeline — a maintained fork with fixes for common install and runtime issues, an open-source backbone that exposes the model over an open HTTP protocol called MMCP, and a Blender add-on that drives it.
How to run Kimodo in Blender
You don't need to set up a server, download weights, or learn a new tool. Four steps:
- Install the Proscenium add-on. Download the latest release ZIP from github.com/animatica-ai/proscenium-blender and drop it into Blender's add-on settings (Edit › Preferences › Add-ons). Prefer to script it? There's a sketch just after these steps.
- Sign in to Animatica — or tick the Self-hosted checkbox in the same preferences pane to point the add-on at your own MMCP-compatible server. Both paths use the same plugin and the same protocol.
- Pick an armature and write a prompt. Any humanoid skeleton works; the hosted service retargets to your rig server-side. Add constraints if you want explicit control: keyframes for the poses that matter, a Bézier curve for the root path, empties to pin hands or feet.
- Hit Generate. Kimodo bakes editable keyframes onto your armature. Accept the take to keep it, reject to roll back to your source action — your work is preserved either way.
That's the whole flow. No FBX export, no retargeting pass, no separate viewer.
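If you'd rather not click through Preferences at all, say on a render farm or a studio machine image, the install step can be done with Blender's own Python API. A minimal sketch, assuming the release ZIP is already downloaded; the ZIP path and the add-on module name are placeholders, not the actual Proscenium identifiers:

```python
# Headless install of the add-on from a downloaded release ZIP.
# "proscenium_blender.zip" and the module name "proscenium" are placeholders;
# use the names from the release you actually downloaded.
import bpy

ZIP_PATH = "/tmp/proscenium_blender.zip"   # wherever you saved the release ZIP

bpy.ops.preferences.addon_install(filepath=ZIP_PATH, overwrite=True)
bpy.ops.preferences.addon_enable(module="proscenium")  # placeholder module name
bpy.ops.wm.save_userpref()                 # persist, so it stays enabled next launch
```

Run it with blender --background --python install_proscenium.py, then sign in (or tick Self-hosted) in the add-on preferences as usual.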
What it feels like in Blender
Kimodo respects what you've already authored. You don't lose your work to the model — the model works around it.
- Pin the poses that matter. Block your key moments and the model fills the inbetweens. Every keyframe is a hard constraint, never overwritten on regeneration.
- Sketch a Bézier curve. The character's root locks to it; the body stays alive on top. No retiming, no foot-skate.
- Pin a hand to an empty. That hand stays where you put it across the whole clip. (All three constraint types are sketched in script form at the end of this section.)
- Type the action. Plain-language prompts turn into clean 30-fps motion baked onto your armature.
After Generate you get a non-destructive preview: Accept keeps the take, Reject rolls back to your source action.
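All of those constraints are ordinary Blender data, so you can author them in the viewport or from a script before you hit Generate. Here's a minimal sketch of the data side only; the object and bone names are placeholders, and how you point the add-on at the curve and the empty depends on the Proscenium panel itself:

```python
# Author the three constraint types as plain Blender data: a pinned key pose,
# a Bézier root path, and an empty that pins a hand. Names are placeholders.
import bpy

arm = bpy.data.objects["Armature"]          # your character's armature

# 1) Pin a pose: keyframe a bone so the pose at that frame is a hard constraint.
hand = arm.pose.bones["hand.L"]
hand.keyframe_insert(data_path="rotation_quaternion", frame=1)

# 2) Sketch a root path: a Bézier curve for the character's root to follow.
bpy.ops.curve.primitive_bezier_curve_add(location=(0.0, 0.0, 0.0))
bpy.context.active_object.name = "RootPath"

# 3) Pin a hand: an empty marking where the hand should stay for the clip.
bpy.ops.object.empty_add(type='PLAIN_AXES', location=(0.5, 0.3, 1.2))
bpy.context.active_object.name = "HandPin.L"
```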
Hosted or self-hosted
Animatica's hosted service handles the GPU and ships the full feature stack — server-side retargeting to your rig, text-to-pose, foot-skate cleanup. Sign in and you're generating motion in Blender in minutes.
Or run it yourself. motionmcp-kimodo is the Apache-2.0 reference backbone — runs Kimodo on your GPU over the open MMCP protocol, no telemetry, no cloud round-trip. Same plugin, same protocol — switch by toggling a checkbox in the add-on's preferences.
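To give a feel for what "same protocol" means, here is a rough sketch of a client talking to a self-hosted server over plain HTTP. The endpoint path and JSON fields below are invented for illustration and will differ from the real MMCP schema, which lives in the motionmcp-kimodo repo; the point is only that it's ordinary HTTP on a machine you control:

```python
# Illustrative only: talk to a self-hosted MMCP server over plain HTTP.
# The endpoint path and JSON fields are hypothetical placeholders, NOT the
# real MMCP schema; check the motionmcp-kimodo repo for the actual spec.
import requests

SERVER = "http://localhost:8080"   # wherever your motionmcp-kimodo server listens

resp = requests.post(
    f"{SERVER}/generate",          # hypothetical endpoint name
    json={
        "prompt": "a person walks forward, breaks into a sprint, then slides to a stop",
        "fps": 30,                 # hypothetical field
    },
    timeout=300,
)
resp.raise_for_status()
motion = resp.json()               # hypothetical response: per-frame joint transforms
print(len(motion.get("frames", [])))
```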
What we're committing to
Animators have been telling us for the last year that AI motion belongs in the tool, not next to it. Kimodo in Blender is the first version of that.
The model is open source. The plugin is open source. The protocol is open source. The only thing that isn't, at the moment, is the GPU we run for you — and even that is something you can replace with your own.
Get the add-on at animatica.ai/kimodo. Read the source on GitHub. We'd love to hear what's missing.