How Our Learning Memory Engine Tracks What You Know (and What You Don't)
A technical look at mastery scoring, weak topic detection, and how the AI adapts content difficulty to your current level.
Adaptive learning platforms promise personalization; most deliver a quiz at the end. We built a real learning memory engine: it tracks 15 signals per student, continuously updates a mastery score for every topic, and steers downstream AI generation toward each student's weak areas.
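The post doesn't publish the exact update rule, but a continuously updated per-topic score can be sketched as an exponentially weighted moving average over quiz outcomes. The `update_mastery` function and the 0.2 weight below are illustrative assumptions, not the engine's actual formula.

```python
# Hypothetical sketch: keep a 0-1 mastery score per topic current by
# blending each new outcome into a running average (EWMA). The weight
# controls how fast recent results move the score.

def update_mastery(current: float, outcome: float, weight: float = 0.2) -> float:
    """Blend the latest outcome (1.0 correct, 0.5 partial, 0.0 incorrect)
    into the running mastery score; the result stays in [0, 1]."""
    return (1 - weight) * current + weight * outcome

score = 0.5  # neutral prior for a new topic
for outcome in [1.0, 1.0, 0.0, 0.5]:  # four quiz results on one topic
    score = update_mastery(score, outcome)
print(round(score, 4))  # prints 0.5352
```

An EWMA has the useful property that one bad quiz nudges the score rather than resetting it, which matches the engine's goal of resisting noise.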
What we track
Every meaningful action — pack generated, quiz submitted, flashcard reviewed, problem solved — creates a LearningEvent. Events carry format (audio/video/text/interactive), topic, difficulty, and outcome (correct/incorrect/partial).
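A LearningEvent might look like the sketch below. The four attributes the post names (format, topic, difficulty, outcome) are real; the other field names, the action strings, and the 1–5 difficulty scale are assumptions for illustration.

```python
# Hypothetical shape of a LearningEvent record. Only format, topic,
# difficulty, and outcome are confirmed fields; the rest is assumed.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class LearningEvent:
    student_id: str
    action: str         # e.g. "pack_generated", "quiz_submitted"
    format: str         # "audio" | "video" | "text" | "interactive"
    topic: str
    difficulty: int     # assumed scale, e.g. 1 (intro) .. 5 (advanced)
    outcome: str        # "correct" | "incorrect" | "partial"
    created_at: datetime

event = LearningEvent(
    student_id="stu_42",
    action="quiz_submitted",
    format="text",
    topic="photosynthesis",
    difficulty=2,
    outcome="partial",
    created_at=datetime.now(timezone.utc),
)
```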
A nightly job (plus ad-hoc triggers from the admin dashboard) runs each batch of not-yet-processed events through three analyzers:
- Format preference analyzer — do you retain better from audio or text? We score each format on a 0-1 scale.
- Session pattern analyzer — your optimal session length, preferred time of day, break cadence.
- Retention analyzer — spaced repetition optimization based on your quiz performance over time.
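To make the first analyzer concrete, here is a hedged sketch of format preference scoring: rate each format 0–1 by the credit-weighted fraction of outcomes attributed to it. The real analyzer likely blends more signals than raw correctness; the `format_scores` function and its credit table are assumptions.

```python
# Illustrative format preference analyzer: score each content format
# 0-1 as (credit earned) / (events seen). Partial answers earn half credit.

from collections import defaultdict

def format_scores(events):
    """events: iterable of (format, outcome) pairs,
    outcome in {"correct", "partial", "incorrect"}."""
    credit = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0}
    totals = defaultdict(lambda: [0.0, 0])  # format -> [earned, seen]
    for fmt, outcome in events:
        totals[fmt][0] += credit[outcome]
        totals[fmt][1] += 1
    return {fmt: earned / seen for fmt, (earned, seen) in totals.items()}

scores = format_scores([
    ("audio", "correct"), ("audio", "incorrect"),
    ("text", "correct"), ("text", "partial"), ("text", "correct"),
])
# audio: 0.5, text: 2.5/3 ~ 0.833 -> this student retains better from text
```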
How it affects your experience
- Cheatsheets bias toward your weak subtopics when you regenerate for a subject you've struggled with.
- Quizzes oversample questions from areas with low mastery.
- Daily suggestions surface topics where your retention curve indicates imminent forgetting.
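Oversampling low-mastery areas can be sketched as weighted sampling where each topic's weight is roughly (1 − mastery). The actual sampling scheme isn't published; `pick_quiz_topics` and the 0.05 floor below are assumptions.

```python
# Hypothetical weak-topic oversampling for quiz generation: topics with
# low mastery get proportionally higher draw weight.

import random

def pick_quiz_topics(mastery: dict, k: int, seed: int = 0) -> list:
    """Draw k question topics, favoring those with low mastery scores."""
    rng = random.Random(seed)  # seeded here only for reproducibility
    topics = list(mastery)
    # Small floor keeps well-mastered topics in rotation for review.
    weights = [1.0 - mastery[t] + 0.05 for t in topics]
    return rng.choices(topics, weights=weights, k=k)

mastery = {"fractions": 0.9, "ratios": 0.3, "percentages": 0.5}
picks = pick_quiz_topics(mastery, k=10)
# "ratios" (weight 0.75) is drawn far more often than "fractions" (0.15)
```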
Confidence gating
The engine won't act on a profile until it has at least 5 data points; below that threshold, it only collects. This prevents cold-start noise, where a student gets pigeonholed on the strength of one bad quiz.
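The gate itself is simple. The threshold of 5 comes from the text above; the function name and the idea that callers fall back to non-personalized defaults are assumptions about the surrounding code.

```python
# Minimal cold-start gate: personalization only switches on once enough
# signal has accumulated; until then, callers use generic defaults.

MIN_DATA_POINTS = 5

def personalization_ready(event_count: int) -> bool:
    """Act on a learner profile only after enough events are recorded."""
    return event_count >= MIN_DATA_POINTS

assert not personalization_ready(1)  # one bad quiz can't pigeonhole a student
assert personalization_ready(7)     # enough signal: adapt content
```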
What you can see
Students see their own mastery scores per topic under the Progress tab. Platform admins can view aggregate patterns (but never individual student content). Institutes see batch-level weak-topic summaries in their analytics dashboard.
Everything is stored in your own user record, never shared across students, never trained into any model.