01 · Context & Vision

GamiWays Project

Collaboration between Memoways (Geneva, 14 years of interactive video expertise) and Gamilab (voice-first AI startup, Audiogami SDK).

GamiWays Research Portal · Gamilab × Memoways · Geneva, Switzerland
01
Product Vision — Two Modes, One Engine

Pedagogical mode + Narrative mode, shared engine.

Both modes share the same infrastructure.

PRODUCT ARCHITECTURE — TWO MODES, ONE SHARED ENGINE

MODE 01 — PEDAGOGICAL ("Lean Forward"): Tutor avatar · Illustrative video · Quiz. Markets: EdTech, heritage, corporate training. Pedagogical control: STRONG.

MODE 02 — NARRATIVE ("Lean Back"): Character avatar · Full screen · Voice navigation. Markets: Cinema, gaming, creative economy. Narrative freedom: STRONG.

The two modes sit on a continuum (0–100%).

SHARED ENGINE — GamiWays Core: Node Editor (conversational graph) · AI Memory (3 layers, R&D) · Orchestration (deterministic + organic) · Avatar Engine (real-time generation) · Multi-Stream (5 synced streams).

Sovereign infrastructure: Exoscale GPU (CH) · Audiogami ASR · WebRTC · PostgreSQL.

Analogy: Final Cut Pro — powerful AND usable by non-technical creators.
02
Competitive Gap

No solution combines all 5 criteria.

Full latency benchmarks in State of the Art.

Criteria evaluated: Real-time (<2s) · Behavioral fidelity · Sovereignty · Conversational memory · Narrative control.
Rating legend: Yes / Partial / No / R&D.

Solutions compared:
HeyGen — Commercial
Synthesia — Commercial
NVIDIA ACE — Enterprise
Beyond Presence — Enterprise
HeyGem OS — Open Source
GamiWays — Target
03
Competitive Comparison

HeyGen · Synthesia · Flowise vs GamiWays

Comparison on 7 criteria. Full analysis of 11 solutions in State of the Art. R&D challenges detailed in Research Challenges.

RADAR COMPARISON — AVATAR PLATFORMS
Seven axes, normalized scores /10 (qualitative assessment; latency is an inverted score, 10 = <100ms): Visual quality · Latency · Cost/accessibility · Sovereignty · AI conversation · Body language · Multi-style.
Platforms plotted: HeyGen · NVIDIA ACE · GamiWays (target) · HeyGem OS · LemonSlice (LS-2.1).
GamiWays scores are an R&D target, not yet achieved.
04
Founding Projects

Two prototypes validated with real users.

P1 · 2023–present

Le Dilemme Plastique

EdTech

Educational tool on ocean plastic pollution. Students converse with an AI avatar, 'Peter', who guides them through environmental science topics.

11
Teachers interviewed
100%
Want to experiment
78%
Already use AI
6.9/10
Current tool satisfaction
P2 · 2024–present

Parle à AVA!

Interactive Cinema

Interactive narrative experience that lets viewers converse with characters from Romed Wyder's dystopian film 'Where is AVA?', rendered as photorealistic avatars.

05
Infrastructure & Technical Expertise

Operational today vs. R&D required.

Audiogami ASR → STT — AVAILABLE

Production pipeline, API + SDK, Swiss-hosted, optional HITL

Exoscale Partnership — AVAILABLE

Swiss GPU cloud for sovereign open-source AI deployment

Two functional prototypes — AVAILABLE

Tested with real users, documented feedback

NocoDB self-hosted — AVAILABLE

Content database with rich metadata

Flowise Expertise — AVAILABLE

Multi-agent orchestration, RAG integration

Conversational memory — GAP

Architecture for long-duration sessions without token explosion
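One common way to bound context size in long sessions is to keep only the last few turns verbatim and fold older turns into a cumulative summary. The sketch below illustrates that shape; all names are illustrative, and the naive string truncation stands in for what would in practice be an LLM-generated summary.

```typescript
// Working-memory sketch: a sliding window of recent turns plus a
// cumulative summary of older turns, so prompt size stays bounded
// no matter how long the session runs.
type Turn = { role: "user" | "avatar"; text: string };

class WorkingMemory {
  private turns: Turn[] = [];
  private summary = "";

  constructor(private windowSize = 6) {}

  add(turn: Turn): void {
    this.turns.push(turn);
    // Fold overflow into the summary instead of growing the window.
    while (this.turns.length > this.windowSize) {
      const old = this.turns.shift()!;
      // Stand-in for LLM summarization: keep a truncated trace.
      this.summary += `${old.role}: ${old.text.slice(0, 40)}… `;
    }
  }

  // What gets injected into the LLM: bounded summary + recent turns.
  toContext(): string {
    const recent = this.turns.map((t) => `${t.role}: ${t.text}`).join("\n");
    return this.summary ? `Summary: ${this.summary}\n${recent}` : recent;
  }
}
```

Token usage is then proportional to the window size plus the (compacted) summary, not to the total session length.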

Real-time expressive avatar — GAP

Generation <500ms with body language and behavioral coherence

Personalized prosodic TTS — GAP

Capture of individual prosodic fingerprint

06
Core Engine — Vision & Principles

From content delivery to interactive experience orchestration.

The GamiWays Core is a headless orchestration engine for guided interactive experiences. It is not an application — it is a foundation layer reusable across multiple products: learning, storytelling, cultural mediation, corporate training.

📚
Learning

Adaptive learning experiences — the avatar guides learners through structured objectives, remembers their progress and adapts content.

🎭
Storytelling

Interactive narratives where characters remember, evolve and respond to viewer choices — beyond linear dialogue.

🏛️
Cultural Mediation

Virtual guides for museums, heritage sites and exhibitions — context-rich, multilingual, sovereign experiences.

🏢
Corporate Training

Professional situation simulations with specialized avatars — onboarding, compliance, soft skills, continuous assessment.

6 guiding principles

01
Experience First

Technology serves the conversation — never the reverse. Every architectural choice is evaluated against the final user experience.

02
Orchestration over Generation

Decide before generating. The Game Master evaluates global context and guides the experience asynchronously — LLM generation is a consequence, not a starting point.
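The "decide before generating" principle can be sketched as a pure decision step that runs before any LLM call. The directive shape and state fields below are assumptions for illustration, not the Core's actual API.

```typescript
// Game Master decision step: inspect session state first, emit a
// directive, and only invoke LLM generation as a consequence.
type Directive =
  | { kind: "respond"; guidance: string }
  | { kind: "switch_avatar"; to: string }
  | { kind: "stay_silent" };

function decide(state: { objectiveMet: boolean; idleTurns: number }): Directive {
  // Objective reached: move the experience forward, no free generation.
  if (state.objectiveMet) return { kind: "switch_avatar", to: "next-act" };
  // User has gone quiet: generate, but under explicit guidance.
  if (state.idleTurns > 2) return { kind: "respond", guidance: "re-engage the user" };
  // Default: do not generate at all.
  return { kind: "stay_silent" };
}
```

Because the decision is deterministic and separate from generation, it can be tested, logged and replayed independently of any LLM provider.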

03
Context is the Product

What we inject into the LLM defines what we receive. Context management (memory, world, knowledge) is the primary technical differentiator.
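A minimal sketch of assembling the three context dimensions (memory, world, knowledge) into one prompt. Field names and the section layout are assumptions; the point is a deterministic, ordered, bounded assembly.

```typescript
// Deterministic context assembly: fixed section order, bounded
// number of knowledge chunks, same input always yields same prompt.
type ContextParts = {
  memory: string;      // what happened (working + episodic memory)
  world: string;       // rules, objectives, scenario state
  knowledge: string[]; // retrieved chunks from external sources
};

function assembleContext(p: ContextParts, maxChunks = 3): string {
  const knowledge = p.knowledge.slice(0, maxChunks).join("\n---\n");
  return [
    `## Memory\n${p.memory}`,
    `## World\n${p.world}`,
    `## Knowledge\n${knowledge}`,
  ].join("\n\n");
}
```

Capping `maxChunks` is one simple way to keep injection size predictable regardless of how much the retrieval step returns.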

04
LLM-Agnostic Always

No provider lock-in. The Core can switch between OpenAI, Anthropic, Mistral or self-hosted models without changing business logic.
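Provider independence typically comes down to a narrow interface that the business logic depends on. A sketch, with an echo stub standing in for a real HTTP client; the interface shape is an assumption, not the Core's published API.

```typescript
// Business logic depends only on this interface, so OpenAI, Anthropic,
// Mistral or a self-hosted model can be swapped without touching the Core.
interface LLMProvider {
  readonly name: string;
  complete(prompt: string): Promise<string>;
}

// Stub provider standing in for a real model client.
class EchoProvider implements LLMProvider {
  readonly name = "echo";
  async complete(prompt: string): Promise<string> {
    return `[echo] ${prompt}`;
  }
}

// Core logic only ever sees the interface.
async function runTurn(llm: LLMProvider, userInput: string): Promise<string> {
  return llm.complete(userInput);
}
```

Swapping providers then means writing one adapter class, never editing scenario or orchestration code.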

05
Keep Core Small

The Core does not include UI, voice, video avatars or authoring tools. These layers build on top — the Core stays minimal, focused, stable.

06
Measure Everything That Matters

Latency, cost, tokens, quality — measured from day one. Evidence-based iteration, not intuition.
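Measuring from day one can be as simple as wrapping every LLM call. The sketch below is illustrative (the result shape and field names are assumptions); in the stack described here the records would feed an observability backend such as the self-hosted Langfuse mentioned in the roadmap.

```typescript
// Per-call measurement wrapper: records wall-clock latency plus
// prompt/completion token counts for every generation.
type TurnMetrics = { latencyMs: number; promptTokens: number; completionTokens: number };

async function measured<T extends { text: string; tokens: number }>(
  call: () => Promise<T>,
  promptTokens: number,
): Promise<{ result: T; metrics: TurnMetrics }> {
  const start = Date.now();
  const result = await call();
  return {
    result,
    metrics: {
      latencyMs: Date.now() - start,
      promptTokens,
      completionTokens: result.tokens,
    },
  };
}
```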

07
Key Concepts

Avatar + Game Master: two agents, one experience.

The Core is structured around precise concepts that define a shared vocabulary between Gamilab and Memoways.

Avatar (conversational actor): AI persona with identity, personality, autonomy and its own memory. The interaction surface — not the product itself.
Game Master, GM (async director): Understands global experience state and guides asynchronously via triggers and directives. Decides when to intervene, switch avatars or inject guidance.
Session (durable container): Container for a user run within a scenario. Persists across conversations and maintains global progression state.
Conversation (dialogue episode): Bounded dialogue episode with one avatar within a session. Each conversation has its own sliding-window context.
Scenario (experience template): Defines objectives, assigned avatars, knowledge sources, progression rules and completion conditions.
Memory (3-layer system): Working memory (sliding window + cumulative summary), episodic persistence (session summaries) and long-term user facts, with a deterministic selection policy.
Context Manager (context assembler): Assembles three dimensions — Memory (what happened), Experience/World (rules, objectives) and Knowledge (external sources) — and injects them deterministically into the LLM.
Knowledge Pipeline (internal RAG): Ingestion (PDF, MD, text), chunking, embeddings, pgvector retrieval and context-aware filtering, with compaction and injection into the GM/Avatar flow.
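The chunking step of an ingestion pipeline like the Knowledge Pipeline above is often a fixed-size split with overlap, so embeddings do not lose meaning at chunk boundaries. A naive character-based sketch (sizes and names are illustrative; production pipelines usually split on sentence or token boundaries):

```typescript
// Split text into overlapping chunks before embedding. Overlap keeps
// context that straddles a boundary retrievable from both chunks.
function chunk(text: string, size = 200, overlap = 40): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size - overlap) {
    chunks.push(text.slice(i, i + size));
    if (i + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded and stored in PostgreSQL via pgvector for similarity retrieval at query time.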
08
Build Roadmap

Three phases, one vision.

PHASE A — Minimal Core · April → July 2026
In Progress

Build and validate the fundamental loop: user input → context assembly → orchestrated avatar response → memory update. Operational back-office. Text-based prototype with one real scenario.

Monorepo platform (pnpm + Turborepo)
LLM loop with provider abstraction
Async Game Master v1 (Director–Actor)
Memory System v3 (3 layers)
Next.js back-office + runtime inspector
LLM observability (Langfuse self-hosted)
Summer prototype with real scenario
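The fundamental loop that Phase A sets out to validate (user input → context assembly → orchestrated avatar response → memory update) can be sketched end to end in a few lines. Everything here is an illustrative stand-in: the session shape, the history-as-context assembly, and the injected `generate` function representing the provider-abstracted LLM call.

```typescript
// One pass through the Phase A loop.
type Session = { history: string[] };

async function fundamentalLoop(
  session: Session,
  userInput: string,
  generate: (prompt: string) => Promise<string>, // provider-abstracted LLM call
): Promise<string> {
  const context = session.history.join("\n");                     // context assembly
  const reply = await generate(`${context}\nuser: ${userInput}`); // avatar response
  session.history.push(`user: ${userInput}`, `avatar: ${reply}`); // memory update
  return reply;
}
```

A text-only version of this loop is enough to exercise orchestration, context and memory with a real scenario before any voice or video layers are added.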
PHASE B — Enhanced Experiences · TBD
Planned

Add voice (STT + TTS streaming), multimedia triggers, multiple scenarios, richer memory systems, and a user-facing frontend.

STT integration (Deepgram / Whisper)
TTS streaming (Cartesia / Inworld TTS-2)
GM-driven multimedia triggers
User frontend with session history
Guided progression engine
PHASE C — Research & Scale Readiness · TBD
Planned

Prepare the platform for advanced integrations: expressive avatars, multi-tenancy, scaling, SDKs and partnerships.

Real-time expressive video avatar integration
Multi-tenancy & security (JWT, RBAC)
Public SDK & versioned API
Developer documentation
Research partnerships
Detailed epic tracking

Full epic table with ✅/🔄/⏳ status — synced from the development repository.
