Emotional Toolbox & Character Design
Cinema-grade emotional design for conversational avatars. State of the art, gaps, and design questions.
A conversational avatar is not just a talking face. Behavioral fidelity requires an explicit emotional design layer: defining, encoding, and activating a repertoire of emotional states consistent with the character's personality, history, and interaction context.
Emotional repertoire
Define a set of discrete and continuous emotional states per character. Each state encodes: facial expression, vocal prosody, cadence, posture, micro-behaviors.
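As a concrete sketch (all type and field names below are illustrative assumptions, not an existing platform's schema), a creator-authored repertoire could be encoded like this in TypeScript:

```ts
// Hypothetical repertoire schema. Names and fields are assumptions
// for illustration, not an existing platform's API.

/** Continuous affect coordinates, both in [-1, 1]. */
interface Affect {
  valence: number; // negative .. positive
  arousal: number; // calm .. excited
}

/** One discrete emotional state and the behavioral channels it drives. */
interface EmotionalState {
  id: string;                    // e.g. "guarded-warmth"
  affect: Affect;                // continuous anchor for blending
  facialExpression: string;      // blendshape-set or preset id
  prosody: { pitchShift: number; rate: number; energy: number };
  cadence: { pauseMs: number; turnLatencyMs: number };
  posture: string;               // rig pose id
  microBehaviors: string[];      // e.g. ["slow-blink", "head-tilt"]
}

/** The full creator-authored repertoire for one character. */
interface CharacterRepertoire {
  characterId: string;
  baseline: string;              // id of the default state
  states: EmotionalState[];
}

const repertoire: CharacterRepertoire = {
  characterId: "noir-detective",
  baseline: "guarded-warmth",
  states: [{
    id: "guarded-warmth",
    affect: { valence: 0.3, arousal: -0.2 },
    facialExpression: "soft-smile",
    prosody: { pitchShift: -0.1, rate: 0.9, energy: 0.4 },
    cadence: { pauseMs: 400, turnLatencyMs: 250 },
    posture: "relaxed-lean",
    microBehaviors: ["slow-blink", "head-tilt"],
  }],
};

console.log(`${repertoire.characterId}: ${repertoire.states.length} state(s)`);
```

Keeping a continuous affect anchor alongside each discrete state is what makes blending between states possible later.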
Transition & coherence
Transitions between emotional states must be smooth and personality-consistent, without creating perceptible breaks in the experience. Challenge: avoiding the 'emotional uncanny valley' effect.
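One plausible mechanism, sketched below under the valence/arousal encoding assumed above, is to rate-limit how fast the avatar's continuous affect may move, with the cap set per personality (a stoic character drifts slowly, an excitable one shifts faster). `stepAffect` and `TransitionStyle` are hypothetical names, not a real SDK:

```ts
// Hypothetical transition sketch: ease the avatar's continuous affect
// toward a target state under a per-character rate cap, so emotional
// shifts stay in character. All names are illustrative assumptions.

interface Affect { valence: number; arousal: number; } // both in [-1, 1]

/** Personality-dependent limit on affect change per second. */
interface TransitionStyle { maxRatePerSec: number; } // stoic => low value

/** Advance `current` toward `target` over `dtSec` seconds, rate-limited. */
function stepAffect(current: Affect, target: Affect,
                    style: TransitionStyle, dtSec: number): Affect {
  const dv = target.valence - current.valence;
  const da = target.arousal - current.arousal;
  const dist = Math.hypot(dv, da);
  const maxStep = style.maxRatePerSec * dtSec;
  if (dist <= maxStep) return { ...target };  // within reach: arrive
  const k = maxStep / dist;                   // scale the step to the cap
  return {
    valence: current.valence + dv * k,
    arousal: current.arousal + da * k,
  };
}

// A reserved character easing from neutral toward mild joy, one tick per second.
let affect: Affect = { valence: 0, arousal: 0 };
const target: Affect = { valence: 0.6, arousal: 0.3 };
const reserved: TransitionStyle = { maxRatePerSec: 0.25 };
for (let tick = 1; tick <= 4; tick++) {
  affect = stepAffect(affect, target, reserved, 1.0);
  console.log(tick, affect.valence.toFixed(2), affect.arousal.toFixed(2));
}
```

A rate cap of this kind gives a direct, creator-tunable lever against abrupt emotional jumps, one candidate cause of the 'emotional uncanny valley' effect.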
Contextual activation
Emotional state is activated by conversation content, interaction history, and user signals (tone, rhythm, content). Open research question: real-time detection of incoming emotional signals.
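A minimal activation policy could score each repertoire state against the detected signals, with an inertia bonus toward the current state to avoid flip-flopping. The signal fields, weights, and names below are assumptions for illustration; the real-time detection itself remains the open research question:

```ts
// Hypothetical activation policy: score each repertoire state against
// detected user signals, with an inertia bonus to avoid flip-flopping.
// Signal fields, weights, and names are assumptions for illustration.

interface UserSignals {
  sentiment: number;        // detected valence of the user's turn, [-1, 1]
  tempo: number;            // speech rate vs. baseline (~1.0 is neutral)
  frustrationHints: number; // 0..1 from lexical/prosodic cues
}

interface StateCandidate { id: string; valence: number; arousal: number; }

interface Context { lastStateId: string; }

/** Pick the state whose affect best matches the incoming signals. */
function activate(states: StateCandidate[], signals: UserSignals, ctx: Context): string {
  let bestId = ctx.lastStateId;
  let bestScore = -Infinity;
  for (const s of states) {
    const valenceMatch = 1 - Math.abs(s.valence - signals.sentiment);
    const arousalMatch = 1 - Math.abs(s.arousal - (signals.tempo - 1));
    // Frustrated users push the policy toward low-arousal, soothing states.
    const deescalation = signals.frustrationHints * (s.arousal < 0 ? 0.5 : -0.5);
    const inertia = s.id === ctx.lastStateId ? 0.2 : 0;
    const score = valenceMatch + arousalMatch + deescalation + inertia;
    if (score > bestScore) { bestScore = score; bestId = s.id; }
  }
  return bestId;
}

const next = activate(
  [
    { id: "calm-reassurance", valence: 0.4, arousal: -0.5 },
    { id: "bright-enthusiasm", valence: 0.8, arousal: 0.6 },
  ],
  { sentiment: -0.3, tempo: 1.2, frustrationHints: 0.7 },
  { lastStateId: "bright-enthusiasm" },
);
console.log(next); // "calm-reassurance": de-escalation outweighs inertia
```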
Key differentiation dimension
No current commercial platform offers an explicit, creator-configurable emotional design system. Most leave emotional state to the LLM's implicit decisions, with no guaranteed control or coherence. The open question is whether an emotional toolbox inspired by actor-direction methods is technically feasible, and at what design and implementation cost.
Competitive landscape — Emotional AI
| Platform | Configurable emotions | Creator control | Real-time | Note |
|---|---|---|---|---|
| Tavus (Raven-1) | ✓ Partial | ✗ Implicit | ✓ | Incoming emotional perception (Raven-1), no creator toolbox |
| LemonSlice (LS-2.1) | ✓ Partial | ✓ API | ✓ | Emotion API + Action API, but no repertoire design |
| Anam | ✓ Partial | ✗ Implicit | ✓ | Built-in emotional intelligence, not configurable |
| HeyGen, Simli, D-ID | ✗ | ✗ | ✓ | Lip-sync only, no emotional layer |
| Target system (hypothesis) | ✓ Complete | ✓ Toolbox | ✓ | Repertoire + transitions + contextual activation + actor direction (to be validated by research) |