Tags: FSRS, SM-2, algorithm, Anki, spaced repetition, machine learning

FSRS vs SM-2: Why the Algorithm Behind Your Flashcard App Matters More Than You Think

For 30 years, SM-2 was the gold standard of spaced repetition. Then FSRS came along and changed everything. Here's what the difference actually means for your daily study sessions.

April 1, 2026



When you rate a flashcard, something happens in the background that most people never think about: an algorithm decides when you'll see that card again. It might be tomorrow. It might be in 47 days. Get that timing right, and you'll remember the information with minimal effort. Get it wrong, and you'll either waste time reviewing cards you already know — or forget the ones you needed most.

For most of spaced repetition's modern history, that algorithm was SM-2. Created by Piotr Wozniak in 1987 for SuperMemo, SM-2 formed the backbone of Anki, the flashcard application used by millions of students globally. It was a genuine breakthrough.

Then, in 2022, a researcher named Jarrett Ye published FSRS — the Free Spaced Repetition Scheduler — and the field was never quite the same.


How SM-2 Works

SM-2's core logic is elegant in its simplicity. After you review a card and rate your recall, the algorithm adjusts two values:

  • Ease Factor — a per-card multiplier that determines how aggressively the interval grows
  • Interval — the number of days before the next review

The default ease factor starts at 2.5. If you consistently rate a card as "Good," the interval grows exponentially: 1 day → 6 days → 15 days → 37 days → 93 days, and so on. If you rate it "Hard" multiple times, the ease factor decreases toward its floor of 1.3 — "ease hell," in the Anki community's parlance — and the intervals grow so slowly that you're trapped in an endless cycle of reviewing the same card every few days.
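The update rule is compact enough to sketch in a few lines of Python. This follows Wozniak's published SM-2 formulas (quality ratings 0–5, where 3+ counts as recall); Anki's actual implementation differs in details such as rounding and its four-button rating scale, so the exact interval sequence can vary slightly.

```python
# A minimal sketch of Wozniak's SM-2 update rule.
def sm2_update(quality, reps, interval, ease=2.5):
    """Return (reps, interval_days, ease) after one review; quality is 0-5."""
    if quality < 3:                      # failed recall: restart the card
        return 0, 1, ease
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    # Ease shrinks on hesitant answers; 1.3 is the floor ("ease hell").
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps + 1, interval, ease

# Simulate always answering "Good" (quality 4): ease stays at 2.5,
# so intervals grow by a constant factor of 2.5 after the first two reviews.
reps, interval, ease = 0, 0, 2.5
schedule = []
for _ in range(5):
    reps, interval, ease = sm2_update(4, reps, interval, ease)
    schedule.append(interval)
print(schedule)
```

Note what's missing: nothing in this function models memory. The same multiplier fires for every learner and every card, which is exactly the rigidity described above.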

This rigidity is SM-2's fundamental limitation. The intervals are calculated, not predicted. The algorithm has no model of how human memory actually decays. It doesn't know when you're about to forget something — it just applies a formula and hopes for the best.


How FSRS Works Differently

FSRS approaches spaced repetition as a prediction problem.

Rather than asking "What should the next interval be?", FSRS asks: "When will the probability of correctly recalling this card drop to my target retention rate (e.g., 90%)?" That's the moment to schedule the next review — not sooner (wasted effort), not later (forgotten).

To make this prediction, FSRS models three variables for every card:

1. Difficulty (D)

A score from 1–10 representing how intrinsically hard this information is for you. A card about a concept you already had background knowledge on might have a difficulty of 2. A foreign vocabulary word in an unfamiliar script might be 8.5.

2. Stability (S)

Measured in days, stability represents how long a card can be left alone before recall drops from 100% to 90%. A newly learned card might have a stability of 1–2 days. A well-reviewed card might have a stability of 300+ days, meaning you only need annual reviews to maintain 90% recall.

3. Retrievability (R)

The current probability — right now, at this exact moment — that you could correctly recall this card. If R = 0.9, you have a 90% chance of getting it right. If R = 0.3, you've likely already forgotten it.
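These three variables connect through a forgetting curve. The sketch below uses the curve published for FSRS-4.5, R(t) = (1 + F·t/S)^(-0.5) with F = 19/81 chosen so that R equals exactly 0.9 when t = S — newer FSRS versions learn the decay exponent as one of the fitted parameters, so treat this as illustrative rather than the exact production formula. Inverting the curve gives the scheduling rule: the interval is the time at which R decays to the target retention.

```python
# FSRS-4.5 forgetting curve (later versions fit the decay as a parameter).
DECAY = -0.5
FACTOR = 19 / 81          # chosen so that R(t = S) is exactly 0.9

def retrievability(t, stability):
    """Probability of recall t days after the last review."""
    return (1 + FACTOR * t / stability) ** DECAY

def next_interval(stability, target_retention=0.9):
    """Days until retrievability decays to the target retention."""
    return stability / FACTOR * (target_retention ** (1 / DECAY) - 1)

print(retrievability(0, 10))    # 1.0 immediately after a review
print(retrievability(10, 10))   # ≈ 0.9 at t = S, by construction
print(next_interval(10))        # ≈ 10: with a 90% target, interval == stability
```

This is why stability and interval coincide at the default 90% target, and why raising stability is the whole game: a card with S = 300 genuinely doesn't need to be seen for most of a year.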

When you rate a card, FSRS uses these three variables plus 21 machine learning parameters to compute a new interval. Those 21 parameters are not arbitrary — they were fitted to tens of thousands of real review logs from actual learners, then refined via techniques like maximum likelihood estimation and stochastic gradient descent.


The Numbers: How Much Better Is FSRS?

The evidence is substantial:

20–30% fewer reviews for the same retention. Independent benchmarks comparing FSRS to SM-2 on identical card decks consistently show FSRS users achieving the same knowledge retention with roughly a quarter fewer reviews. For a serious student doing 200 reviews per day, that's 50 reviews saved — every single day.

Personalized parameters. FSRS includes an optimizer that analyzes your own review history and adjusts its 21 parameters to fit your personal memory curves. The default parameters are already excellent (derived from millions of reviews), but a personalized optimizer — available after roughly 1,000 reviews — makes the algorithm even more accurate for your specific learning patterns.

No "ease hell." Because FSRS doesn't use a fixed ease factor, there's no downward spiral where difficult cards get permanently trapped in short intervals. Difficulty is recalculated after every review based on your actual recall performance.

Transparent optimization target. You can set your desired retention rate (default: 90%) and FSRS will tell you your projected daily workload for any retention setting. Some learners find that dropping from 90% to 75–80% retention dramatically reduces their review burden while still providing excellent long-term recall of the material that matters most.
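To see why a lower target helps so much, you can invert the forgetting curve for different retention targets. The numbers below use the FSRS-4.5 curve from earlier (FSRS-6 fits the decay exponent per learner, so exact multipliers will vary), but the shape of the trade-off holds: small drops in target retention buy disproportionately longer intervals.

```python
# Interval length as a function of target retention, using the FSRS-4.5
# curve solved for t: t(r) = S / FACTOR * (r**-2 - 1). Illustrative numbers.
FACTOR = 19 / 81

def interval_for(stability, retention):
    """Days until recall probability decays to `retention`."""
    return stability / FACTOR * (retention ** -2 - 1)

base = interval_for(30, 0.90)   # a card with S = 30 gets a 30-day interval at 90%
for r in (0.90, 0.85, 0.80, 0.75):
    print(f"{r:.0%} target -> {interval_for(30, r) / base:.2f}x the interval")
```

Dropping from a 90% to an 80% target roughly 2.4×'s every interval, which is why the projected daily workload falls so sharply in the FSRS retention simulator.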


Real-World Impact: A Day in the Life

Imagine you're studying for a professional certification exam six months away, with a deck of 800 cards.

With SM-2: Your daily review count fluctuates unpredictably. Some days you review 50 cards, others 200. Cards you found easy early on keep coming back at the same frequency as hard ones. Cards you struggled with get trapped in short intervals. By month three, you've spent an average of 90 minutes a day reviewing — much of it redundant.

With FSRS: Your daily queue is smoother and more predictable. Cards you've mastered drift toward monthly or annual intervals, clearing out of your daily queue entirely. Difficult cards receive more frequent, precisely timed reviews. Your daily review time stabilizes at around 45 minutes — for the same retention outcome.

The difference isn't just time. It's sustainability. Students who experience "review fatigue" from SM-2 often abandon their decks entirely. FSRS-powered schedules keep the daily load steadier and smaller, which makes the habit far easier to maintain.


FSRS in Academic Research

FSRS has attracted serious academic attention. The algorithm's developer, Jarrett Ye, published a paper at the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining — one of the most competitive venues in machine learning research — while still an undergraduate student.

A 2025 research paper (LECTOR) independently benchmarked FSRS against six competing scheduling algorithms across 100 simulated learners over 100 days. FSRS achieved an 89.6% learning success rate — the second highest of any algorithm tested, behind only LECTOR itself (which requires LLM inference for every review, making it far more computationally expensive). For a practical, locally runnable algorithm, FSRS had no peer.

FSRS has been implemented in Python, Go, Rust, TypeScript, Swift, Dart, and a dozen other programming languages, and is now used in Anki (as of 2023), RemNote, and a growing list of educational applications — including Neurako.


Why Neurako Uses FSRS

When we built the spaced repetition engine in Neurako, we had one question: what gives learners the best return on time invested?

The answer was unambiguous. FSRS's combination of cognitive science grounding, machine learning personalization, and real-world performance data made it the only reasonable choice.

Every review you complete in Neurako feeds FSRS's model of your memory. Every rating you give — "Again," "Hard," "Good," or "Easy" — isn't just scheduling the next review. It's training a personal model of how your brain retains information. Over time, as your review history grows, the algorithm becomes increasingly accurate for you specifically.

We also expose your retention analytics directly: you can see your current retention rate, total cards mastered, and daily review trends in your Neurako dashboard. The goal isn't to keep you studying more — it's to help you study less, while remembering more.


Should You Care About the Algorithm?

For casual flashcard use — learning a few dozen words before a trip — the algorithm barely matters. But for serious, long-term learning projects — medical school, language acquisition, professional certifications, bar prep — the algorithm is everything.

The difference between SM-2 and FSRS, extrapolated over a year of daily study, is potentially hundreds of hours. Hours you could spend on deeper understanding, practice problems, sleep, or life.

The algorithm is invisible when it's working well. You just find yourself remembering things you thought you'd forgotten, with less effort than you expected.

That's FSRS working. And that's why it's the engine at the heart of Neurako.


References

  • Ye, J. et al. (2022). A Stochastic Shortest Path Algorithm for Optimizing Spaced Repetition Scheduling. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
  • Wozniak, P. (1990). Optimization of learning. SuperMemo World.
  • IntelliKernelAI. (2025). LECTOR: LLM-Enhanced Concept-based Test-Oriented Repetition for Adaptive Spaced Learning. arXiv:2508.03275.
  • Open Spaced Repetition community. (2023). ABC of FSRS. GitHub Wiki: open-spaced-repetition/fsrs4anki.
  • Denicola, D. (2025, May). Spaced Repetition Systems Have Gotten Way Better. domenic.me.
  • RemNote Help Center. (2024). The FSRS Spaced Repetition Algorithm.
  • PyPI. (2026). fsrs package — py-fsrs v6.3.1.


Ready to put this into practice?

Create AI-powered flashcard decks and start your streak today.

Try Neurako free →