01·The Project·Founding Prototypes

The concrete origins of GamiWays

GamiWays was not born from an abstract idea. It is the generalization of two functional prototypes, tested with real users, each of which revealed the same fundamental challenges: latency, experience personalization, immersion and voice quality.

Storygami·Parle à AVA!
Edugami·Le Dilemme Plastique
00
From Prototypes to Platform

Two experiences, one shared engine.

Parle à AVA! and Le Dilemme Plastique are two radically different experiences on the surface — one narrative and cinematic, the other pedagogical and scientific. Yet they share exactly the same technical challenges: STT→LLM→TTS pipeline, conversational agent orchestration, session memory management, perceived latency and French voice quality.

This convergence gave birth to GamiWays: a generic platform capable of supporting both the Storygami experience (interactive cinema) and the Edugami experience (guided learning), sharing the same orchestration engine.
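The shared turn loop can be sketched as a sequential, timed STT→LLM→TTS pipeline. The TypeScript below is an illustrative sketch, not the platform's actual code: the three stage stubs stand in for the real Deepgram, OpenRouter and ElevenLabs calls, and the 2 s budget mirrors the latency target stated for the pipeline.

```typescript
// Sketch of a timed STT→LLM→TTS turn against a latency budget.
// Stage implementations are stubs; in the prototypes they wrap
// Deepgram (STT), OpenRouter (LLM) and ElevenLabs (TTS).

type Stage<I, O> = (input: I) => Promise<O>;

interface StageTiming { name: string; ms: number }

async function timed<I, O>(
  name: string,
  stage: Stage<I, O>,
  input: I,
  timings: StageTiming[],
): Promise<O> {
  const t0 = Date.now();
  const out = await stage(input);
  timings.push({ name, ms: Date.now() - t0 });
  return out;
}

// Hypothetical stubs standing in for the real providers.
const stt: Stage<ArrayBuffer, string> = async () => "bonjour";
const llm: Stage<string, string> = async (text) => `réponse à « ${text} »`;
const tts: Stage<string, ArrayBuffer> = async () => new ArrayBuffer(0);

async function runTurn(audio: ArrayBuffer) {
  const timings: StageTiming[] = [];
  const transcript = await timed("stt", stt, audio, timings);
  const reply = await timed("llm", llm, transcript, timings);
  const speech = await timed("tts", tts, reply, timings);
  const totalMs = timings.reduce((sum, t) => sum + t.ms, 0);
  return { transcript, reply, speech, timings, withinBudget: totalMs < 2000 };
}
```

Because the stages run strictly in sequence, each provider's latency adds up, which is why both prototypes invested in caching and parallelization tricks downstream.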

Latency & fluidity
STT→LLM→TTS pipeline <2s
French voice quality
Naturalness, prosody, immersion
Session memory
Continuity without token explosion
Expressive avatar
Lip-sync + body language sync
01
Parle à AVA! — Storygami
Storygami·Interactive cinema

Parle à AVA!

Prototype 1 — v0.20.1 — 32h of development

The experience

Dive into a dystopian thriller with "Parle à Ava !" — an immersive conversational audiovisual experience accessible on desktop and mobile. You are catapulted into a mountain chalet where a family — Emma, Max, Ava (9 years old) and Léo (15 years old) — tries to survive a virus that transforms women into men (the "protogyny").

"Parle à Ava !" is the interactive extension of the film "Where is Ava?", a dystopian thriller in production (public presentation planned for 2026). Explore the film's initial context, discover the family's stakes and secrets, and prepare to experience a personalised conversational story full of twists and surprises.

Through conversational AI and hyperrealistic synthetic videos, engage in unique dialogues with the characters of this world on the brink. Ask questions, express your hypotheses and explore complex themes in a world in peril. Each conversation reveals a fragment of a possible reality, with all its contradictions and complexity, to confront you with your own values.

The objective of this digital project: to make you think. Explore your reactions to the themes addressed and test your own certainties and moral principles through this original interactive experience.

Key metrics

v0.20.1
Current version
21
Versions delivered
32h
Development time
8+
LLM models supported

User journey

A/B Onboarding
Gumlet video intro
Max incoming call
Voice-to-voice conversation
Dynamic video triggers
Trust gate
~50-field questionnaire

Tools used

Lovable — Main vibe coding — 17 sessions
Supabase — Edge Functions + PostgreSQL + pgvector
OpenRouter — Multi-model LLM (Qwen, Claude, GPT-5, Gemini, Grok)
Deepgram — Live STT nova-2 FR — Push-to-Talk
ElevenLabs — TTS eleven_multilingual_v2 — Max voice
Voyage AI — voyage-3 embeddings + rerank-2.5
PostHog — Session recording analytics
LLM cost tracking is automatic via openRouterLLM.ts — input/output tokens + USD cost recorded per session in the admin dashboard.
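Per-session cost accounting of this kind reduces to a small accumulator over token usage. A minimal sketch, assuming a simple per-million-token pricing model; the interfaces, class name, and prices are hypothetical illustrations, not the actual `openRouterLLM.ts` API.

```typescript
// Sketch of per-session LLM cost tracking: each completion's token
// usage is converted to USD and accumulated for the admin dashboard.
// Pricing values are placeholders, not real OpenRouter rates.

interface Usage { promptTokens: number; completionTokens: number }
interface ModelPricing { inputPerMTok: number; outputPerMTok: number } // USD per 1M tokens

function usageCostUsd(usage: Usage, pricing: ModelPricing): number {
  return (
    (usage.promptTokens / 1e6) * pricing.inputPerMTok +
    (usage.completionTokens / 1e6) * pricing.outputPerMTok
  );
}

class SessionCostTracker {
  private totalUsd = 0;

  // Called once per LLM completion in the session.
  record(usage: Usage, pricing: ModelPricing): void {
    this.totalUsd += usageCostUsd(usage, pricing);
  }

  get total(): number {
    return this.totalUsd;
  }
}
```

Keeping the accumulator per session (rather than per request) is what makes a per-session USD figure directly readable from the dashboard.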
02
Le Dilemme Plastique — Edugami
Edugami·Voice-first education

Le Dilemme Plastique

2 prototypes — 70h of development — ~380 CHF

The project

Dilemme Plastique is an innovative project combining documentary expertise, artificial intelligence and active pedagogy to raise awareness of environmental issues related to plastic pollution. It is designed to evolve into the generalisable educational tool Edugami.

The project was born from the meeting between Peter Charaf, a documentary filmmaker who spent a decade (since 2015) travelling the world twice to document the plastic problem in partnership with the Race for Water Foundation, and the Memoways team. This collaboration resulted in a unique proprietary database: field-verified images, videos and testimonials covering environmental pollution, health impacts, recycling challenges and emerging solutions.

After a first deterministic prototype presented at the Semaine des Médias in French-speaking Switzerland in February 2024, the team made a major technological pivot: abandoning the linear no-code approach in favour of a system 100% powered by generative AI. The student can communicate with Peter Charaf's avatar via voice or text, in an environment designed to maximise pedagogical engagement.

Long-term vision: Edugami is an educational tool generalisable to other themes. "We replace plastic with something else, another theme, history, another social issue." — Peter Charaf

Two approaches, one pedagogical challenge

Light / Tutorial — v2.6.0

Peter guides 12–18 year-old students through the analysis of an image of the Place des Nations in Geneva to discover 6 hidden clues. Sessions ≤5 min, 24+ simultaneous. Pipeline: Deepgram live → OpenAI Assistants API → ElevenLabs.

25h
Dev
~220 CHF
Cost
Flowise / Full — live

Split-screen interface: Peter chat (left) + media panel (right). Peter orchestrated via Flowise (28 nodes). 20–30 min sessions, no account. Includes videos, embedded articles, Postgres persistence, admin console, PostHog analytics.

45h
Dev
~160 CHF
Cost

Latency optimisations — measured values

TTS hit (LRU cache) — Light & Flowise — ~40 ms

Measured locally (CHANGELOG v1.2.0 Light + Flowise)

Pre-warmed welcome TTS — Flowise — 2,569 ms → ~40 ms

Measured at server boot — paid once for all students (Flowise CHANGELOG)

Pre-generated session resume — Light — 150–500 ms

Instead of 3–5s — measured (Light CHANGELOG)

Overall gains (Phase 1 + 2: -40 to -55% total latency) are design estimates documented in CHANGELOG v1.2.0, not PostHog measurements. The AVA latency dashboard (admin) collects real per-session data but no aggregated values are published in the repo.
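The ~40 ms cache-hit figure above is what an in-memory LRU cache of synthesized audio delivers: repeated lines (welcome messages, tutorial prompts) skip the TTS round-trip entirely. Below is a minimal sketch of such a cache, assuming a Map-based implementation; the class name, key scheme, and default capacity are illustrative, not the prototypes' actual code.

```typescript
// Sketch of a TTS LRU cache: audio buffers keyed by voice + text.
// JS Map iteration order is insertion order, so re-inserting on a
// hit keeps the most recently used entries at the back.

class TtsLruCache {
  private cache = new Map<string, ArrayBuffer>();
  constructor(private readonly capacity = 128) {}

  private key(voiceId: string, text: string): string {
    return `${voiceId}::${text}`;
  }

  get(voiceId: string, text: string): ArrayBuffer | undefined {
    const k = this.key(voiceId, text);
    const hit = this.cache.get(k);
    if (hit !== undefined) {
      // Re-insert to mark the entry as most recently used.
      this.cache.delete(k);
      this.cache.set(k, hit);
    }
    return hit;
  }

  set(voiceId: string, text: string, audio: ArrayBuffer): void {
    const k = this.key(voiceId, text);
    this.cache.delete(k);
    this.cache.set(k, audio);
    if (this.cache.size > this.capacity) {
      // Evict the least recently used (oldest) entry.
      const oldest = this.cache.keys().next().value as string;
      this.cache.delete(oldest);
    }
  }
}
```

Pre-warming fits the same structure: synthesizing the welcome message into the cache at server boot is what turns the 2.5 s first-hit cost into a one-time expense.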

Tools used and development costs

Light / Tutorial Prototype
25h
Development
~220 CHF
Total cost

Tools: Replit Agent (vibe coding + hosting), Claude Code, OpenAI GPT-4o Assistants API + Whisper, ElevenLabs TTS.

Flowise / Full prototype — 45h — 160 CHF
Replit Agent — Main development + hosting
Flowise — Peter orchestration — 28 nodes (Memoways self-hosted)
OpenAI — GPT-4o via Flowise — Peter conversations
ElevenLabs — Per-sentence TTS + Scribe STT (WER 2.11%)
Deepgram — Non-final live transcription (visual feedback)
PostHog + Rectify — Product analytics + session recording
Total — 160 CHF — 45h
03
Convergence → GamiWays

The same challenges, one answer.

Despite radically different use contexts, both prototypes converged on the same unmet needs. This convergence is the technical and strategic justification for GamiWays.

Challenge · AVA (Storygami) · Dilemme (Edugami) · GamiWays solution
Pipeline latency · GM+Max parallelization, fail-open validator · TTS LRU cache, pre-generated resume, Deepgram live · Runtime State + SSE, <2s latency budget per layer
Session memory · summarize-session (Facts/Topics/Promises, every 4 turns) · additional_instructions as source of truth · Memory System v2 — episodic + semantic + procedural
French voice quality · ElevenLabs stability 0.6, speed 0.92 — 'Clear & articulate' preset · ElevenLabs Scribe STT WER 2.11% — TTS eleven_multilingual_v2 · Documented Voice Pipeline — 17 TTS + 10 STT compared
Agent orchestration · Autonomous Game Master (trust, triggers, game over) · Flowise 28 nodes — complexity = bottleneck · Context Engine v2 — 7 dimensions, headless Game Master
Expressive avatar · Gumlet video + dynamic triggers (R&D) · Student avatar (thumbnail) — Peter without video avatar · Epic C.1 — Expressive Avatar Integration (Phase C)
Data sovereignty · Supabase Edge Functions — server-side keys · Replit + Neon Postgres — classroom data · LLM-agnostic architecture, Exoscale Swiss GPU cloud
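The AVA session-memory approach (summarize every 4 turns to avoid token explosion) can be sketched as a rolling summary that replaces raw turns. The summarizer below is a stub; in the prototype it is an LLM call extracting Facts/Topics/Promises. Class and method names are hypothetical illustrations.

```typescript
// Sketch of periodic session summarization: once every N turns, raw
// turns are folded into a compact summary, so the LLM context stays
// bounded (summary + recent turns) instead of growing linearly.

interface Turn { role: "user" | "assistant"; text: string }

class SessionMemory {
  private turns: Turn[] = [];
  private summary = "";

  constructor(
    // Stand-in for the LLM summarization call (Facts/Topics/Promises).
    private readonly summarize: (summary: string, turns: Turn[]) => string,
    private readonly summarizeEvery = 4,
  ) {}

  addTurn(turn: Turn): void {
    this.turns.push(turn);
    if (this.turns.length >= this.summarizeEvery) {
      this.summary = this.summarize(this.summary, this.turns);
      this.turns = []; // keep only the compact summary, not raw turns
    }
  }

  // What gets sent to the LLM: summary + recent unsummarized turns.
  context(): { summary: string; recentTurns: Turn[] } {
    return { summary: this.summary, recentTurns: [...this.turns] };
  }
}
```

The payoff is the "continuity without token explosion" property: context size is capped at one summary plus at most N-1 raw turns, regardless of session length.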
From these two prototypes, GamiWays was born.

Discover the product vision, target architecture and Core Engine build status.