Spaced Repetition for Efficient Learning

source link: https://www.gwern.net/Spaced-repetition

Spacing effect

“You can get a good deal from rehearsal,
If it just has the proper dispersal.
You would just be an ass,
To do it en masse,
Your remembering would turn out much worsal.”

Ulrich Neisser⁠⁠3

My current favorite prosthesis is the class of software that exploits the spacing effect⁠, a centuries-old observation in cognitive psychology, to achieve results in studying or memorization much better than conventional student techniques; it is, alas, obscure⁠⁠4⁠.

The spacing effect essentially says that if you have a question (“What is the fifth letter in this random sequence you learned?”), and you can only study it, say, 5 times, then your memory of the answer (‘e’) will be strongest if you spread your 5 tries out over a long period of time - days, weeks, and months. One of the worst things you can do is blow your 5 tries within a day or two. You can think of the ‘forgetting curve’ as being like a chart of a radioactive half-life: each review bumps your memory up in strength 50% of the chart, say, but review doesn’t do much in the early days because the memory simply hasn’t decayed much! (Why does the spacing effect work, on a biological level? There are clear neurochemical differences between massed and spaced training in animal models, with spacing (>1 hour) enhancing long-term potentiation while massed training does not⁠⁠5⁠, but the why and wherefore - that’s an open question; see the concept of memory traces or the ⁠sleep studies.) A graphical representation of the forgetting curve:

Stahl et al 2010; CNS Spectrums
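The half-life analogy can be made concrete with a toy model (the numbers are purely illustrative, not fitted to any real forgetting data): recall probability decays exponentially, and each successful review lengthens the half-life, so an early review buys little while a well-timed one buys a lot:

```python
def recall_probability(days_since_review: float, half_life: float) -> float:
    """Toy forgetting curve: recall probability halves once per
    'half-life', exactly like radioactive decay."""
    return 0.5 ** (days_since_review / half_life)

# A fresh memory with a 1-day half-life is a coin flip after a day:
print(recall_probability(1, half_life=1))             # 0.5
# Reviewing after a couple of hours barely matters -- the memory
# simply hasn't decayed much yet:
print(round(recall_probability(0.1, half_life=1), 2)) # 0.93
# But once reviews have stretched the half-life to 8 days, a whole
# day costs almost nothing:
print(round(recall_probability(1, half_life=8), 2))   # 0.92
```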

Even better, it’s known that active recall is a far superior method of learning to simply being passively exposed to information.⁠⁠6 Spacing also scales to huge quantities of information; gambler/​financier Edward O. Thorp harnessed “spaced learning” when he was a physics grad student “in order to be able to work longer and harder”⁠⁠7⁠, and Roger Craig set multiple records on the quiz show Jeopardy! 2010–2011 in part thanks to ⁠using Anki to memorize chunks of a collection of ⁠>200,000 past questions⁠⁠8⁠; a later Jeopardy winner, Arthur Chu, also used spaced repetition⁠⁠9⁠. Med school students (who have become a major demographic for SRS due to the extremely large amount of factual material they are expected to memorize) usually have thousands of cards, especially if using pre-made decks (more feasible for medicine due to fairly standardized curriculums & general lack of time to make custom cards). Foreign-language learners can easily reach 10-30,000 cards; one Anki user reports a deck of >765k automatically-generated cards filled with Japanese audio samples from many sources (“Youtube videos, video games, TV shows, etc”).

A graphic might help; imagine here one can afford to review a given piece of information a few times (one is a busy person). By looking at the odds we can remember the item, we can see that cramming wins in the short term, but unexercised memories decay so fast that after not too long spacing is much superior:

Wired (original, Wozniak?); massed vs spaced (⁠alternative)

It’s more dramatic if we look at a video visualizing decay of a corpus of memory with ⁠random review vs most-recent review vs spaced review⁠.
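That decay can be reproduced with a few lines of simulation. In the sketch below (all constants are made up for illustration), each review multiplies a memory’s half-life, but an early review of a barely-decayed memory earns only a small boost; 5 massed reviews then leave almost nothing after 2 months, while the same 5 reviews spaced out leave a usable memory:

```python
def simulate(review_days, final_day, half_life=1.0):
    """Recall probability at `final_day` after reviewing on `review_days`,
    under a toy exponential-decay model: a review of a mostly-forgotten
    memory doubles its half-life, while a review of a still-fresh one
    (recall > 90%) barely helps -- the 'desirable difficulty' idea."""
    last = review_days[0]
    for day in review_days[1:]:
        p = 0.5 ** ((day - last) / half_life)  # recall at review time
        half_life *= 2.0 if p < 0.9 else 1.2   # illustrative boosts
        last = day
    return 0.5 ** ((final_day - last) / half_life)

massed = simulate([0, 0.2, 0.4, 0.6, 0.8], final_day=60)  # cram in one day
spaced = simulate([0, 2, 6, 14, 30], final_day=60)        # expanding gaps
print(f"massed: {massed:.2f}, spaced: {spaced:.2f}")      # massed: 0.00, spaced: 0.27
```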

If you’re so good, why aren’t you rich?

Most people find the concept of programming obvious, but the doing impossible.⁠⁠10

Of course, the latter strategy (cramming) is precisely what students do. They cram the night before the test, and a month later can’t remember anything. So why do people do it? (I’m not innocent myself.) Why is spaced repetition so dreadfully unpopular, even among the people who try it once?⁠⁠11

Scumbag Brain meme: knows everything when cramming the night before the test / and forgets everything a month later

Because it does work. Sort of. Cramming is a trade-off: you trade a strong memory now for weak memory later. (Very weak⁠⁠12⁠.) And tests are usually of all the new material, with occasional old questions, so this strategy pays off! That’s the damnable thing about it - its memory longevity & quality are, in sum, less than those of spaced repetition, but cramming delivers its goods now⁠13⁠. So cramming is a rational, if short-sighted, response, and even SRS software recognizes its utility & supports it to some degree⁠⁠14⁠. (But as one might expect, if the testing is continuous and incremental, then the learning tends to also be long-lived⁠⁠15⁠; I do not know if this is because that kind of testing is a disguised accidental spaced repetition system, or because the students/​subjects simply study/​act differently in response to small-stakes exams.) In addition to this short-term advantage, there’s an ignorance of the advantages of spacing and a subjective illusion that the gains persist⁠⁠16⁠17 (cf. Son & Simon 2012⁠18⁠, Mulligan & Peterson 2014⁠, Bjork et al 2013⁠, Deslauriers et al 2019); from Kornell 2009’s study of GRE vocab (emphasis added):

Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing….When they do consider spacing, they often exhibit the illusion that massed study is more effective than spaced study, even when the reverse is true (Dunlosky & Nelson, 1994⁠; Kornell & Bjork, 2008a; Simon & Bjork 2001⁠; ⁠Zechmeister & Shaughnessy, 1980).

As one would expect if the testing and spacing effects are real things, students who naturally test themselves and study well in advance of exams tend to have higher GPAs.⁠⁠19 If we interpret questions as tests, we are not surprised to see that 1-on-1 tutoring works dramatically better than regular teaching and that tutored students answer orders of magnitude more questions⁠⁠20⁠.

This short-term perspective is not a good thing in the long term, of course. Knowledge builds on knowledge; one is not learning independent bits of trivia. Richard Hamming recalls in “You and Your Research”:

You observe that most great scientists have tremendous drive. I worked for ten years with John Tukey at Bell Labs⁠. He had tremendous drive. One day about three or four years after I joined, I discovered that John Tukey was slightly younger than I was. John was a genius and I clearly was not. Well I went storming into Bode’s office and said, “How can anybody my age know as much as John Tukey does?” He leaned back in his chair, put his hands behind his head, grinned slightly, and said, “You would be surprised Hamming, how much you would know if you worked as hard as he did that many years.” I simply slunk out of the office!

What Bode was saying was this: “Knowledge and productivity are like compound interest⁠.” Given two people of approximately the same ability and one person who works 10% more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity - it is very much like compound interest. I don’t want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime. I took Bode’s remark to heart; I spent a good deal more of my time for some years trying to work a bit harder and I found, in fact, I could get more work done.

Knowledge needs to accumulate, and flashcards with spaced repetition can aid in just that accumulation, fostering steady review even as the number of cards and intellectual prerequisites mounts into the ⁠thousands⁠.

This long term focus may explain why explicit spaced repetition is an uncommon studying technique: the pay-off is distant & counterintuitive, the cost of self-control near & vivid. (See hyperbolic discounting⁠.) It doesn’t help that it’s pretty difficult to figure out when one should review - the optimal point is when you’re just about to forget about it, but that’s the kicker: if you’re just about to forget about it, how are you supposed to remember to review it? You only remember to review what you remember, and what you already remember isn’t what you need to review!⁠⁠21

The paradox is resolved by letting a computer handle all the calculations. We can thank ⁠Hermann Ebbinghaus for investigating in such tedious detail that we can, in fact, program a computer to calculate both the forgetting curve and the optimal set of reviews⁠⁠22⁠. This is the insight behind spaced repetition software: ask the same question over and over, but over increasing spans of time. You start by asking it once every few days, and soon the human remembers it reasonably well. Then you expand intervals out to weeks, then months, and then years. Once the memory is formed and dispatched to long-term memory, it needs but occasional exercise to remain hale and hearty⁠⁠23 - I remember well the large dinosaurs made of cardboard for my 4th or 5th birthday, or the tunnel made out of boxes, even though I recollect them once or twice a year at most.
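Concretely, the expanding-interval bookkeeping looks something like the SM-2 algorithm which SuperMemo popularized and which Anki & Mnemosyne descend from. A minimal sketch (the 1-day/6-day seed intervals and the ease formula follow the published SM-2 description, but the loop and self-grades are just illustrative):

```python
def sm2_update(quality: int, reps: int, interval: float, ease: float):
    """One review step in the style of SM-2.
    quality: self-graded recall, 0 (blackout) to 5 (perfect);
    returns the updated (reps, interval_in_days, ease)."""
    if quality < 3:                       # forgotten: start the card over
        return 0, 1.0, ease
    if reps == 0:
        interval = 1.0                    # first successful review
    elif reps == 1:
        interval = 6.0
    else:
        interval *= ease                  # the expanding schedule
    ease += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    return reps + 1, interval, max(ease, 1.3)

# A card graded 'good' (4) every time gets exponentially rarer reviews:
reps, interval, ease = 0, 0.0, 2.5
for _ in range(6):
    reps, interval, ease = sm2_update(4, reps, interval, ease)
    print(round(interval))                # 1, 6, 15, 38, 94, 234
```

With quality 4 the ease factor stays at its 2.5 default, so after the 1-day and 6-day seed intervals each review simply multiplies the gap by 2.5 - which is why a mature card can go months or years between reviews.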

Literature review

But don’t take my word for it - Nullius in verba! We can look at the science. Of course, if you do take my word for it, you probably just want to read about how to use it and all the nifty things you can do, so I suggest you ⁠skip all the way down to that section. Everyone else, we start at the beginning:

Background: testing works!

“If you read a piece of text through twenty times, you will not learn it by heart so easily as if you read it ten times while attempting to recite from time to time and consulting the text when your memory fails.” –The New Organon, Francis Bacon

The testing effect is the established psychological observation that the mere act of testing someone’s memory will strengthen the memory (regardless of whether there is feedback). Since spaced repetition is just testing on particular days, we ought to establish that testing works better than regular review or study, and that it works outside of memorizing random dates in history. To cover a few papers:

  1. Allen, G.A., Mahler, W.A., & Estes, W.K. (1969). “Effects of recall tests on long-term retention of paired associates”⁠. Journal of Verbal Learning and Verbal Behavior, 8, 463-470

    1 test results in memories as strong a day later as studying 5 times; intervals improve retention compared to massed presentation.

  2. Karpicke & Roediger (2003). ⁠“The Critical Importance of Retrieval for Learning”

    In learning Swahili vocabulary, students were given varying routines of testing or studying or testing and studying; this resulted in similar scores during the learning phase. Students were asked to predict what percentage they’d remember (average: 50% over all groups). One week later, the students who tested remembered ~80% of the vocabulary versus ~35% for non-testing students. Some students were tested or studied more than others; diminishing returns set in quickly once the memory had formed the first day. Students reported rarely testing themselves and not testing already learned items.

    Lesson: again, testing improves memory compared to studying. Also, no student knows this.

  3. Roediger & Karpicke (2006a). “Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention”

    Students were tested (with no feedback) on reading comprehension of a passage at delays of 5 minutes, 2 days, and 1 week. Studying beat testing at 5 minutes, but nowhere else; students believed studying superior to testing at all intervals. At 1 week, testing scores were ~60% versus ~40%.

    Lesson: testing improves memory compared to studying. Everyone (teachers & students) ‘knows’ the opposite.

  4. Karpicke & Roediger (2006a). “Expanding retrieval promotes short-term retention, but equal interval retrieval enhances long-term retention”

    General scientific prose comprehension; from Roediger & Karpicke 2006b: “After 2 days, initial testing produced better retention than restudying (68% vs. 54%), and an advantage of testing over restudying was also observed after 1 week (56% vs. 42%).”

  5. Roediger & Karpicke (2006b). “The Power of Testing Memory: Basic Research and Implications for Educational Practice”

    Literature review; 7 studies before 1941 demonstrating testing improves retention, and 6 afterwards. See also the reviews “Spacing Learning Events Over Time: What the Research Says” & “Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction”⁠, Carpenter et al 2012.

  6. Agarwal et al 2008, ⁠“Examining the Testing Effect with Open- and Closed-Book Tests”

    As with #2, the purer forms of testing (in this case, open-book versus closed-book testing) did better over the long run, and students were deluded about what worked best.

  7. Bangert-Drowns et al 1991. “Effects of frequent classroom testing”

    Meta-analysis of 35 studies (1929-1989) varying tests during school semesters. 29 found benefits; 5 found negatives; 1 null result. Meta-study found large benefits to testing even once, then diminishing returns.

  8. Cook 2006, “Impact of self-assessment questions and learning styles in Web-based learning: a randomized, controlled, crossover trial”⁠; final scores were higher when the doctors (residents) learned with questions.

  9. Johnson & Kiviniemi 2009, “The Effect of Online Chapter Quizzes on Exam Performance in an Undergraduate Social Psychology Course” (“This study examined the effectiveness of compulsory, mastery-based, weekly reading quizzes as a means of improving exam and course performance. Completion of reading quizzes was related to both better exam and course performance.”); see also McDaniel et al 2012⁠.

  10. Metsämuuronen 2013, “Effect of Repeated Testing on the Development of Secondary Language Proficiency”

  11. Meyer & Logan 2013, “Taking the Testing Effect Beyond the College Freshman: Benefits for Lifelong Learning”⁠; verifies testing effect in older adults has similar effect size as younger

  12. Larsen & Butler 2013, “Test-enhanced learning”

  13. Yang et al 2021, “Testing (Quizzing) Boosts Classroom Learning: A Systematic And Meta–Analytic Review”

(One might be tempted to object that testing works for some learning styles⁠, perhaps verbal styles. This is an unsupported assertion inasmuch as the experimental literature on learning styles is poor and the existing evidence mixed that there are such things as learning styles.⁠⁠24)

Subjects

The above studies often used pairs of words or words themselves. How well does the testing effect generalize?

Materials which benefited from testing:

  • foreign vocabulary (eg. Karpicke & Roediger 2003, Cepeda et al 2009⁠, Fritz et al 2007⁠⁠25⁠, de la Rouviere 2012)
  • GRE materials (like vocab, Kornell 2009); prose passages on general scientific topics (Karpicke & Roediger, 2006a; Pashler et al, 2003)
  • trivia (McDaniel & Fisher 1991)
  • elementary & middle school lessons with subjects such as biographical material and science (⁠Gates 1917⁠; Spitzer 1939⁠26 and Vlach & Sandhofer 2012⁠⁠27⁠, respectively)
  • Agarwal et al (2008): short-answer tests superior on textbook passages
  • history textbooks; retention better with initial short-answer test rather than multiple choice (Nungester & Duchastel 1982)
  • LaPorte & Voss (1975) also found better retention compared to multiple-choice or recognition problems
  • ⁠Duchastel & Nungester, 1981: 6 months after testing, testing beat studying in retention of a history passage
  • Duchastel (1981): free recall decisively beat short-answer & multiple choice for reading comprehension of a history passage
  • Glover (1989): free recall self-test beat recognition or Cloze deletions⁠; subject matter was the labels for parts of flowers
  • Kang, McDermott, and Roediger (2007): prose passages; initial short answer testing produced superior results 3 days later on both multiple choice and short answer tests
  • Leeming (2002): tests in 2 psychology courses, introductory & memory/​learning; “80% vs. 74% for the introductory psychology course and 89% vs. 80% for the learning and memory course”⁠⁠28

This covers a pretty broad range of what one might call ‘declarative’ knowledge. Extending testing to other fields is more difficult and may reduce to ‘write many frequent analyses, not large ones’ or ‘do lots of small exercises’, whatever those might mean in those fields:

A third issue, which relates to the second, is whether our proposal of testing is really appropriate for courses with complex subject matters, such as the philosophy of Spinoza, Shakespeare’s comedies, or creative writing. Certainly, we agree that most forms of objective testing would be difficult in these sorts of courses, but we do believe the general philosophy of testing (broadly speaking) would hold - students should be continually engaged and challenged by the subject matter, and there should not be merely a midterm and final exam (even if they are essay exams). Students in a course on Spinoza might be assigned specific readings and thought-provoking essay questions to complete every week. This would be a transfer-appropriate form of weekly ‘testing’ (albeit with take-home exams). Continuous testing requires students to continuously engage themselves in a course; they cannot coast until near a midterm exam and a final exam and begin studying only then.⁠⁠29

Downsides

Testing does have some known flaws:

  1. interference in recall - ability to remember tested items drives out ability to remember similar untested items

    Most/​all studies were in laboratory settings and found relatively small effects:

    In sum, although various types of recall interference are quite real (and quite interesting) phenomena, we do not believe that they compromise the notion of test-enhanced learning. At worst, interference of this sort might dampen positive testing effects somewhat. However, the positive effects of testing are often so large that in most circumstances they will overwhelm the relatively modest interference effects.

  2. multiple choice tests can accidentally lead to ‘negative suggestion effects’ where having previously seen a falsehood as an item on the test makes one more likely to believe it.

    This is mitigated or eliminated when there’s quick feedback about the right answer (see Butler & Roediger 2008 “Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing”). Solution: don’t use multiple choice; inferior in testing ability to free recall or short answers, anyway.

Neither problem seems major.

Distributed

A lot depends on when you do all your testing. Above we saw some benefits to testing a lot the moment you learn something, but the same number of tests could be spread out over time, to give us the spacing effect or spaced repetition. There are hundreds of studies involving the spacing effect:

Almost unanimously they find spacing out tests is superior to massed testing when the final test/​measurement is conducted days or years later⁠⁠30⁠, although the mechanism isn’t clear⁠⁠31⁠. Besides all the previously mentioned studies, we can throw in:

The research literature focuses extensively on the question of what kind of spacing is best and what this implies about memory: a spacing that has static fixed intervals or a spacing which expands? This is important for understanding memory and building models of it, and would be helpful for integrating spaced repetition into classrooms (for example, ⁠Kelley & Whatson 2013’s 10 minutes studying / 10 minutes break schedule, repeating the same material 3 times, designed to trigger LTM formation on that block of material). But for practical purposes, this is uninteresting; to sum it up, there are many studies pointing each way, and whatever difference in efficiency exists is minimal. Most existing software follows SuperMemo in using an expanding spacing algorithm, so it’s not worth worrying about; as Mnemosyne developer Peter Bienstman says, it’s not clear the more complex algorithms really help⁠⁠32⁠, and the Anki developers were concerned about the complexity, difficulty of reimplementing SM’s proprietary algorithms, lack of substantial gains, & larger errors SM3+ risks in attempting to be more optimal. So too here.

For those interested, 3 of the studies that found fixed spacings better than expanding:

  1. Carpenter, S. K., & DeLosh, E. L. (2005). “Application of the testing and spacing effects to name learning”⁠. Applied Cognitive Psychology, 19, 619-636⁠⁠33

  2. Logan, J. M. (2004). Spaced and expanded retrieval effects in younger and older adults. Unpublished doctoral dissertation, Washington University, St. Louis, MO

    This thesis is interesting inasmuch as Logan found that young adults did considerably worse with an expanding spacing after a day.

  3. Karpicke & Roediger, 2006a

The fixed vs expanding issue aside, a list of additional generic studies finding benefits to spaced vs massed:

Generality of spacing effect

We have already seen that spaced repetition is effective on a variety of academic fields and mediums. Beyond that, spacing effects can be found in:

  • various “domains (eg. learning perceptual motor tasks or learning lists of words)”⁠⁠42 such as spatial⁠⁠43
  • “across species (eg. rats⁠, pigeons, and humans [or flies or bumblebees⁠, and sea slugs, Carew et al 1972 & Sutton et al 2002])”
  • “across age groups [infancy⁠⁠44⁠, childhood⁠⁠45⁠, adulthood⁠⁠46⁠, the elderly⁠⁠47] and individuals with different memory impairments”
  • “and across retention intervals of seconds⁠⁠48 [to days⁠⁠49] to months” (we have already seen studies using years)

The domains are limited, however. Cepeda et al 2006:

[Moss 1995⁠, reviewing 120 articles] concluded that longer ISIs facilitate learning of verbal information (eg. spelling⁠⁠50) and motor skills (eg. mirror tracing); in each case, over 80% of studies showed a distributed practice benefit. In contrast, only one third of intellectual skill (eg. math computation) studies showed a benefit from distributed practice, and half showed no effect from distributed practice.

…[Donovan and Radosevich (1999)] The largest effect sizes were seen in low rigor studies with low complexity tasks (eg. rotary pursuit, typing, and peg reversal), and retention interval failed to influence effect size. The only interaction Donovan and Radosevich examined was the interaction of ISI and task domain. It is important to note that task domain moderated the distributed practice effect; depending on task domain and lag, an increase in ISI either increased or decreased effect size. Overall, Donovan and Radosevich found that increasingly distributed practice resulted in larger effect sizes for verbal tasks like free recall, foreign language, and verbal discrimination, but these tasks also showed an inverse-U function, such that very long lags produced smaller effect sizes. In contrast, increased lags produced smaller effect sizes for skill tasks like typing, gymnastics, and music performance.

Skills like gymnastics and music performance raise an important point about the testing effect and spaced repetition: they are for the maintenance of memories or skills; they do not increase them beyond what was already learned. If one is a gifted amateur when one starts reviewing, one remains a gifted amateur. Ericsson covers what is necessary to improve and attain new expertise: deliberate practice⁠51⁠. From “The Role of Deliberate Practice”:

The view that merely engaging in a sufficient amount of practice—regardless of the structure of that practice—leads to maximal performance, has a long and contested history. In their classic studies of Morse Code operators, Bryan and Harter (1897⁠, 1899) identified plateaus in skill acquisition, when for long periods subjects seemed unable to attain further improvements. However, with extended efforts, subjects could restructure their skill to overcome plateaus…Even very experienced Morse Code operators could be encouraged to dramatically increase their performance through deliberate efforts when further improvements were required…More generally, Thorndike (1921) observed that adults perform at a level far from their maximal level even for tasks they frequently carry out. For instance, adults tend to write more slowly and illegibly than they are capable of doing…The most cited condition [for optimal learning and improvement of performance] concerns the subjects’ motivation to attend to the task and exert effort to improve their performance…The subjects should receive immediate informative feedback and knowledge of results of their performance…In the absence of adequate feedback, efficient learning is impossible and improvement only minimal even for highly motivated subjects. Hence mere repetition of an activity will not automatically lead to improvement in, especially, accuracy of performance…In contrast to play, deliberate practice is a highly structured activity, the explicit goal of which is to improve performance. Specific tasks are invented to overcome weaknesses, and performance is carefully monitored to provide cues for ways to improve it further. We claim that deliberate practice requires effort and is not inherently enjoyable.

Motor skills

It should be noted that reviews conflict on how much spaced repetition applies to motor skills; Lee & Genovese 1988 find benefits, while Adams 1987 and earlier do not. The difference may be that simple motor tasks benefit from spacing, as suggested by Shea & Morgan 1979 (benefits to a randomized/​spaced schedule), while complex ones where the subject is already operating at his limits do not benefit, as suggested by ⁠Wulf & Shea 2002⁠. Stambaugh 2009 mentions some divergent studies:

The contextual interference hypothesis (Shea and Morgan 1979, Battig 1966 [“Facilitation and interference” in ⁠Acquisition of skill]) predicted the blocked condition would exhibit superior performance immediately following practice (acquisition) but the random condition would perform better at delayed retention testing. This hypothesis is generally consistent in laboratory motor learning studies (eg. Lee & Magill 1983⁠, Brady 2004), but less consistent in applied studies of sports skills (with a mix of positive & negative eg. Landin & Hebert 1997⁠, Hall et al 1994, Regal 2013) and fine-motor skills (Ollis et al 2005⁠, Ste-Marie et al 2004).

Some of the positive spaced repetition studies (from Son & Simon 2012):

Perhaps even prior to the empirical work on cognitive learning and the spacing effect, the benefits of spaced study had been apparent in an array of motor learning tasks, including maze learning (Culler 1912), typewriting (Pyle 1915), archery (Lashley 1915), and javelin throwing (Murphy 1916; see Ruch 1928, for a larger review of the motor learning tasks which reap benefits from spacing; see also Moss 1996, for a more recent review of motor learning tasks). Thus, as in the cognitive literature, the study of practice distribution in the motor domain is long established (see reviews by Adams 1987; Schmidt and Lee 2005), and most interest has centered around the impact of varying the separation of learning trials of motor skills in learning and retention of practiced skills. Lee and Genovese (1988) conducted a review and meta-analysis of studies on distribution of practice, and they concluded that massing of practice tends to depress both immediate performance and learning, where learning is evaluated at some removed time from the practice period. Their main finding was, as in the cognitive literature, that learning was relatively stronger after spaced than after massed practice (although see Ammons 1988; Christina and Shea 1988; Newell et al. 1988 for criticisms of the review)…Probably the most widely cited example is Baddeley and Longman’s (1978) study concerning how optimally to teach postal workers to type. They had learners practice once a day or twice a day, and for session lengths of either 1 or 2 h at a time. The main findings were that learners took the fewest cumulative hours of practice to achieve a performance criterion in their typing when they were in the most distributed practice condition. This finding provides clear evidence for the benefits of spacing practice for enhancing learning. However, as has been pointed out (Newell et al. 1988; Lee and Wishart 2005), there is also a trade-off to be considered in that the total elapsed time (number of days) between the beginning of practice and reaching criterion was substantially longer for the most spaced condition….The same basic results have been repeatedly demonstrated in the decades since (see reviews by Magill and Hall 1990; Lee and Simon 2004), and with a wide variety of motor tasks including different badminton serves (Goode and Magill 1986), rifle shooting (Boyce and Del Rey 1990), a pre-established skill, baseball batting (Hall et al. 1994), learning different logic gate configurations (Carlson et al. 1989; Carlson and Yaure 1990), for new users of automated teller machines (Jamieson and Rogers 2000), and for solving mathematical problems as might appear in a class homework (Rohrer and Taylor 2007; Le Blanc and Simon 2008; Taylor and Rohrer 2010).

In this vein, it’s interesting to note that interleaving may be helpful for tasks with a mental component as well: Hatala et al 2003⁠, ⁠Helsdingen et al 2011⁠, and according to Huang et al 2013 the rates at which Xbox Halo: Reach video game players advance in skill match nicely the predictions from distributed practice: players who play only 4–8 matches a week (distributed) advance more in skill per match than players who play more, but advance slower per week than players who mass many more matches. (See also Stafford & Haasnoot 2016⁠.)

Abstraction

Another potential objection is to argue⁠⁠52 that spaced repetition inherently hinders any kind of abstract learning and thought because related materials are not being shown together - allowing for comparison and inference - but days or months apart. Ernst A. Rothkopf: “Spacing is the friend of recall, but the enemy of induction” (Kornell & Bjork 2008, p. 585). This is plausible based on some of the early studies⁠⁠53⁠, but the recent studies I know of directly examining the issue have found that spaced repetition helps abstraction as well as general recall:

  1. Kornell & Bjork 2008a, “Learning concepts and categories: Is spacing the ‘enemy of induction’?” Psychological Science, 19, 585-592

  2. Vlach, H. A., Sandhofer, C. M., & Kornell, N. (2008). “The spacing effect in children’s memory and category induction”⁠. Cognition, 109, 163-167

  3. Kenney 2009. “The Spacing Effect in Inductive Learning”

  4. Kornell, N., Castel, A. D., Eich, T. S., & Bjork, R. A. (2010). “Spacing as the friend of both memory and induction in younger and older adults”⁠. Psychology and Aging, 25, 498-503

  5. Zulkiply et al 2011

  6. Vlach & Sandhofer 2012, “Distributing Learning Over Time: The Spacing Effect in Children’s Acquisition and Generalization of Science Concepts”⁠, Child Development

  7. Zulkiply 2012, ⁠“The spacing effect in inductive learning”

  8. McDaniel et al 2013, “Effects of Spaced versus Massed Training in Function Learning”

  9. Verkoeijen & Bouwmeester 2014, “Is spacing really the ‘friend of induction’?”

  10. Rohrer et al 2014: 1⁠, 2⁠; Rohrer et al 2019: “A randomized controlled trial of interleaved mathematics practice”

  11. Vlach et al 2014, “Equal spacing and expanding schedules in children’s categorization and generalization”

  12. Gluckman et al, “Spacing Simultaneously Promotes Multiple Forms of Learning in Children’s Science Curriculum”

Review summary

To bring it all together, the gist is:

  • testing is effective and comes with minimal ⁠negative factors

  • expanding spacing is roughly as good as or better than (wide) fixed intervals, but expanding is more convenient and the default

  • testing (and hence spacing) is best on intellectual, highly factual, verbal domains, but may still work in many low-level domains

  • the research favors questions which force the user to use their memory as much as possible; in descending order of preference:

    1. free recall
    2. short answers
    3. multiple-choice
    4. Cloze deletion
    5. recognition
  • the research literature is comprehensive and most questions have been answered - somewhere.

  • the most common mistakes with spaced repetition are

    1. formulating poor questions and answers
    2. assuming it will help you learn, as opposed to maintain and preserve what one already learned⁠⁠54⁠. (It’s hard to learn from cards, but if you have learned something, it’s much easier to then devise a set of flashcards that will test your weak points.)

Using it

One doesn’t need to use SuperMemo, of course; there are plenty of free alternatives. I like Mnemosyne (homepage) myself - Free⁠, packaged for Ubuntu Linux⁠, easy to use, free mobile client, long track record of development and reliability (I’ve used it since ~2008). But the SRS Anki is also popular, and has the advantages of being more feature-rich and having a larger & more active community (and possibly better support for East Asian language material and a better, though proprietary, mobile client).

OK, but what does one do with it? It’s a surprisingly difficult question, actually. It’s akin to “the tyranny of the blank page” (or blank wiki); now that I have all this power - a mechanical golem that will never forget and never let me forget whatever I chose to - what do I choose to remember?

How Much To Add

The most difficult task, beyond that of just persisting until the benefits become clear, is deciding what’s valuable enough to add in. In a 3 year period, one can expect to spend ⁠“30–40 seconds” on any given item. The long-run ⁠theoretical predictions are a little hairier. Given a single item, the formula for average daily time spent on it is Time = 1⁄500 × nthYear^−1.5 + 1⁄30000. During our 20th year, we would spend t = 1⁄500 × 20^−1.5 + 1⁄30000, or about 5.57e-5 minutes a day. This is the average daily time, so to recover the annual time spent, we simply multiply by 365. Suppose we were interested in how much time a flashcard would cost us over 20 years. The average daily time changes every year (the graph looks like an exponential decay, remember), so we have to run the formula for each year and sum them all; in Haskell:

sum $ map (\year -> ((1/500 * year**(-1.5)) + 1/30000) * 365.25) [1..20]
-- 1.8291

Which evaluates to 1.8 minutes. (This may seem too small, but one doesn’t spend much time in the first year and the time drops off quickly⁠⁠55⁠.) Anki user muflax’s statistics put his per-card time at 71s, for example. But maybe Piotr Woźniak was being optimistic or we’re bad at ⁠writing flashcards⁠, so we’ll double it to 5 minutes. That’s our key rule of thumb that lets us decide what to learn and what to forget: if, over your lifetime, you will spend more than 5 minutes looking something up or will lose more than 5 minutes as a result of not knowing something, then it’s worthwhile to memorize it with spaced repetition. 5 minutes is the line that divides trivia from useful data.⁠⁠56 (There might seem to be thousands of flashcards that meet the 5 minute rule. That’s fine. Spaced repetition can accommodate dozens of thousands of cards. See the ⁠next section⁠.)
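The same calculation is easy to replicate in Python if Haskell isn’t handy; this is a direct transcription of the formula above, nothing more:

```python
def daily_minutes(year):
    """Average daily review time (in minutes) for one card during its
    nth year, per the approximation above."""
    return (1 / 500) * year ** -1.5 + 1 / 30000

def lifetime_minutes(years=20):
    """Total review time for one card, summed over `years` years."""
    return sum(daily_minutes(y) * 365.25 for y in range(1, years + 1))

print(lifetime_minutes(20))  # ≈ 1.83 minutes, matching the Haskell result
```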

To a lesser extent, one might wonder: when one is in a hurry, should one learn something with spaced repetition or with massed? How far away should the tests or deadlines be before abandoning spaced repetition? It’s hard to compare, since one would need specific regimens to compare for the crossover point, but for massed repetition, the average time after memorization at which one has a 50% chance of remembering the memorized item seems to be 3-5 days.⁠⁠57 Since there would be 2 or 3 repetitions in that period, presumably one would do better than 50% in recalling an item. 5 minutes and 5 days seems like a memorable enough rule of thumb: ‘don’t use spaced repetition if you need it sooner than 5 days or it’s worth less than 5 minutes’.

Overload

One common experience of new users of spaced repetition is adding too much stuff - trivialities and things they don’t really care about. But they soon learn the curse of Borges’s Funes the Memorious⁠. If they don’t actually want to learn the material they put in, they will soon stop doing the daily reviews - which will cause reviews to pile up, which will be further discouraging, and so they stop. At least with physical fitness there isn’t a precise, dismaying number indicating how far behind you are! But if you have too little at the beginning, you’ll have few repetitions per day, and you’ll see little benefit from the technique itself - it looks like boring flash card review.

What to add

I find one of the best uses for Mnemosyne is, besides the classic use of memorizing academic material such as geography or the periodic table or foreign vocabulary or Bible/​Koran verses or the avalanche of medical school facts, to add in words from A Word A Day⁠58 and Wiktionary⁠, memorable quotes I see⁠⁠59⁠, personal information such as birthdays (or license plates, a problem for me before), and so on. Quotidian uses, but all valuable to me. With a diversity of flashcards, I find my daily review interesting. I get all sorts of questions - now I’m trying to see whether a Haskell fragment is syntactically correct, now I’m pronouncing Korean hangul and listening to the answer, now I’m trying to find the Ukraine on a map, now I’m enjoying some A.E. Housman poetry, followed by a few quotes from ⁠LessWrong quote threads, and so on. Other people use it for many other things; one application that impresses me for its simple utility is ⁠memorizing names & faces ⁠of ⁠students although learning musical notes is also not bad.

The workload

On average, when I’m studying a new topic, I’ll add 3-20 questions a day. Combined with my particular memory, I usually review about 90 or 100 items a day (out of the total >18,300). This takes under 20 minutes, which is not too bad. (I expect the time is expanded a bit by the fact that early on, my formatting guidelines were still being developed, and I hadn’t the full panoply of categories I do now - so every so often I must stop and edit categories.)

If I haven’t been studying something recently, the exponential decaying of reviews slowly drops the daily review. For example, in March 2011, I wasn’t studying many things, so for 2011-03-24–2011-03-26, my scheduled daily reviews are 73, 83, and 74; after that, it’ll probably drop down into the 60s, and then after another week or two, into the 50s and so on until it hits the minimum plateau which will slowly shrink over years. (I haven’t gone long enough without dumping cards in to know what that might be.) By February 2012, the daily reviews are in the 40s or sometimes 50s for similar reasons, but the gradual shrinkage will continue. We can see this vividly, and we can even see a sort of analogue of the original forgetting curve, if we ask Mnemosyne 2.0 to graph the number of cards to review per day for the next year up to February 2013 (assuming no additions or missed reviews etc.):

A wildly varying but clearly decreasing graph of predicted cards per day

If Mnemosyne weren’t using spaced repetition, it would be hard to keep up with 18,300+ flashcards. But because it is using spaced repetition, keeping up is easy.

Nor is 18.3k extraordinary. Many users have decks in the 6-7k range, Mnemosyne developer Peter Bienstman has >8.5k & Patrick Kenny >27k, Hugh Chen has a 73k+ deck, and in #anki⁠, they tell me of one user who triggered bugs with his >200k deck. 200,000 may be a bit much, but for regular humans, some amount smaller seems possible - it’s interesting to compare SRS decks to the feat of memorizing Paradise Lost or to the Muslim title of ‘hafiz’⁠, one who has memorized the ~80,000 words of the Koran, or the stricter ‘hafid’, one who had memorized the Koran and 100,000 hadiths as well. Other forms of memory are still more powerful.⁠⁠60 (I suspect that spaced repetition is involved in one of the few well-documented cases of “hyperthymesia”, Jill Price: reading ⁠Wired⁠, she has ordinary fallible powers of memorization for surprise demands with no observed anatomical differences and is restricted to “her own personal history and certain categories like television and airplane crashes”; further, she is a packrat with obsessive-compulsive traits who keeps >50,000 pages of detailed diaries perhaps due to a childhood trauma & associates daily events nigh-involuntarily with past events. Marcus says the other instances of hyperthymesia resemble Price.)

When to review

When should one review? In the morning? In the evening? Any old time? The studies demonstrating the spacing effect do not control or vary the time of day, so in one sense, the answer is: it doesn’t matter - if it did matter, there would be considerable variance in how effective the effect is based on when a particular study had its subjects do their reviews.

So one reviews at whatever time is convenient. Convenience makes one more likely to stick with it, and sticking with it overpowers any temporary improvement.

If one is not satisfied with that answer, then on general considerations, one ought to review before bedtime & sleep. Memory consolidation seems to be related, and sleep is known to powerfully influence what memories enter long-term memory, strengthening memories of material learned close to bedtime and increasing creativity⁠; interrupting sleep without affecting total sleep time or quality still damages memory formation in mice⁠61⁠. So reviewing before bedtime would be best. (Other mental exercises show improvement when trained before bedtime; for example, ⁠dual n-back⁠.) One possible mechanism is that it may be that the expectancy of future reviews/​tests is enough to encourage memory consolidation during sleep; so if one reviews and goes to bed, presumably the expectancy is stronger than if one reviewed at breakfast and had an eventful day and forgot entirely about the reviewed flashcards. (See also the correlation between time of studying & GPA in Hartwig & Dunlosky 2012.) Neural growth may be related; from Stahl 2010:

Recent advances in our understanding of the neurobiology underlying normal human memory formation have revealed that learning is not an event, but rather a process that unfolds over time.16⁠,17⁠,18⁠,[Squire 2003 Fundamental Neuroscience],20 Thus, it is not surprising that learning strategies that repeat materials over time enhance their retention.20,21⁠,22⁠,23⁠,⁠24⁠,25⁠,⁠26

…Thousands of new cells are generated in this region every day, although many of these cells die within weeks of their creation.⁠31 The survival of dentate gyrus neurons has been shown to be enhanced in animals when they are placed into learning situations.16-20 Animals that learn well retain more dentate gyrus neurons than do animals that do not learn well. Furthermore, 2 weeks after testing, animals trained in discrete spaced intervals over a period of time, rather than in a single presentation or a ‘massed trial’ of the same information, remember better.16-20 The precise mechanism that links neuronal survival with learning has not yet been identified. One theory is that the hippocampal neurons that preferentially survive are the ones that are somehow activated during the learning process.16-20⁠⁠62 The distribution of learning over a period of time may be more effective in encouraging neuronal survival by allowing more time for changes in gene expression and protein synthesis that extend the life of neurons that are engaged in the learning process.

…Transferring memory from the encoding stage, which occurs during alert wakefulness, into consolidation must thus occur at a time when interference from ongoing new memory formation is reduced.17,18 One such time for this transfer is during sleep, especially during non-rapid eye movement sleep, when the hippocampus can communicate with other brain areas without interference from new experiences.⁠32⁠,⁠33⁠,34 Maybe that is why some decisions are better made after a good night’s rest and also why pulling an all-nighter, studying with sleep deprivation, may allow you to pass an exam an hour later but not remember the material a day later.

Prospects: extended flashcards

Let’s step back for a moment. What are all our flashcards, small and large, doing for us? Why do I have a pair of flashcards for the word ‘anent’ among many others? I can just look it up.

But look ups take time compared to already knowing something. (Let’s ignore the previously discussed 5 minute rule.) If we think about this abstractly in a computer science context, we might recognize it as an old concept in algorithms & optimization discussions - the space-time tradeoff⁠. We trade off lookup time against limited skull space.

Consider the sort of factual data already given as examples - we might one day need to know the average annual rainfall in Honolulu or Austin, but it would require too much space to memorize such data for all capitals. There are millions of English words, but in practice any more than 100,000 is excessive. More surprising is a sort of procedural knowledge. An extreme form of space-time tradeoffs in computers is when a computation is replaced by pre-calculated constants. We could take a math function and calculate its output for each possible input. Usually such a lookup table of input to output is really large. Think about how many entries would be in such a table for all possible integer multiplications between 1 and 1 billion. But sometimes the table is really small (like binary Boolean functions) or small (like trigonometric tables) or large but still useful (rainbow tables usually start in the gigabytes and easily reach terabytes).

Given an infinitely large lookup table, we could replace completely the skill of, say, addition or multiplication by the lookup table. No computation. The space-time tradeoff taken to the extreme of the space side of the continuum. (We could go the other way and define multiplication or addition as the slow computation which doesn’t know any specifics like the multiplication table - as if every time you wanted to add 2+2 you had to count on 4 fingers.)
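The two ends of the continuum can be made concrete with a toy Python sketch (illustrative only; a tiny 12×12 table stands in for the extremes):

```python
# Space-time tradeoff, sketched: compute on demand vs. look up a
# precomputed table.

def multiply(a, b):
    """Pure computation, no stored facts: repeated addition,
    like counting on fingers."""
    return sum(a for _ in range(b))

# Memoize: precompute every product over a small input range.
TABLE = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}

def multiply_lookup(a, b):
    """Pure memory, no computation: just consult the table."""
    return TABLE[(a, b)]
```

The lookup version answers instantly but its table grows quadratically with the input range; the computed version needs no space but pays in time on every call.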

So suppose we were children who wanted to learn multiplication. SRS and Mnemosyne can’t help because multiplication is not a specific factoid? The space-time tradeoff shows us that we can de-proceduralize multiplication and turn it partly into factoids. It wouldn’t be hard for us to write a quick script or macro to generate, say, 500 random cards which ask us to multiply AB by XY, and import them to Mnemosyne.⁠⁠63
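A sketch of such a generator script in Python (the filename and card count are arbitrary; the tab-separated question/answer format is the plain-text form that Mnemosyne and Anki can both import):

```python
import random

def make_cards(n=500, lo=10, hi=99, seed=0):
    """Generate n two-digit multiplication flashcards as
    (question, answer) string pairs."""
    rng = random.Random(seed)
    cards = []
    for _ in range(n):
        a, b = rng.randint(lo, hi), rng.randint(lo, hi)
        cards.append((f"{a} * {b} = ?", str(a * b)))
    return cards

if __name__ == "__main__":
    # One tab-separated card per line, importable as a text file.
    with open("multiplication-cards.txt", "w") as f:
        for q, a in make_cards():
            f.write(f"{q}\t{a}\n")
```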

After all, which is your mind going to do - get good at multiplying 2 numbers (generate on-demand), or memorize 500 different multiplication problems (memoize)? From my experience with multiple subtle variants on a card, the mind gives up after just a few and falls back on a problem-solving approach - which is exactly what one wants to exercise, in this case. Congratulations; you have done the impossible.

From a software engineering point of view, we might want to modify or improve the cards, and 500 snippets of text would be a tad hard to update. So coolest would be a ‘dynamic card’. Add a markup type like <eval src="​"> , and then Mnemosyne feeds the src argument straight into the Python interpreter, which returns a tuple of the question text and the answer text. The question text is displayed to the user as usual, the user thinks, requests the answer, and grades himself. In Anki, Javascript is supported directly by the application in HTML <script> tags (currently inline only but Anki could presumably import libraries by default), for example for kinds of ⁠syntax highlighting⁠, so any kind of dynamic card could be written that one wants.
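Concretely, a dynamic card is just a function from nothing to a (question, answer) pair; a minimal Python sketch of the interface the hypothetical `<eval>` markup implies (the registry and generator names here are invented for illustration):

```python
import random

# A registry of generators; a hypothetical <eval src="mult"> card
# would name one of these, and the SRS would call it at review time.
GENERATORS = {}

def generator(name):
    def register(fn):
        GENERATORS[name] = fn
        return fn
    return register

@generator("mult")
def mult_card():
    """Each review produces a fresh question instead of a fixed factoid."""
    x, y = random.randint(2, 99), random.randint(2, 99)
    return f"{x} * {y} = ?", str(x * y)

def render_card(src):
    """What the SRS would do with a dynamic card: run the named
    generator and get back (question, answer) text to display."""
    return GENERATORS[src]()
```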

So for multiplication, the dynamic card would get 2 random integers, print a question like x * y = ? and then print the result as the answer. Every so often you would get a new multiplication question, and as you get better at multiplication, you see it less often - exactly as you should. Still in a math vein⁠, you could generate variants on formulas or programs where one version is the correct one and the others are subtly wrong; I do this by hand with my programming flashcards (especially if I make an error doing exercises, that signals a finer point to make several flashcards on), but it can be done automatically. ⁠kpreid ⁠describes one tool of his:

I have written a program (in the form of ⁠a web page) which does a specialized form of this [generating ‘damaged formulas’]. It has a set of generators of formulas and damaged formulas, and presents you with a list containing several formulas of the same type (eg. ∫ 2x dx = x^2 + C) but with one damaged (eg. ∫ 2x dx = 2x^2 + C).

This approach generalizes to anything you can generate random problems of or have large databases of examples of. Khan Academy apparently does something like this in associating large numbers of (algorithmically-generated?) problems with each of its little modules and tracking retention of the skill in order to decide when to do further review of that module. For example, maybe you are studying Go and are interested in learning life-and-death positions⁠. Those are things that can be generated by computer Go programs, or fetched from places like GoProblems.com⁠. For even more examples, Go is rotationally invariant - the best move remains the same regardless of which way the board is oriented, and since there is no canonical direction for the board (unlike in chess) a good player ought to be able to play the same no matter how the board looks - so each specific example can be mirrored in 3 other ways. Or one could test one’s ability to ‘read’ a board by writing a dynamic card which takes each example board/​problem and adds some random pieces as long as some go-playing program like GNU Go says the best move hasn’t changed because of the added noise.
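Generating the 3 extra orientations is only a few lines of Python (boards represented as nested lists; mirroring would double the count again):

```python
def rotate(board):
    """Rotate a square board 90 degrees clockwise."""
    return [list(row) for row in zip(*board[::-1])]

def variants(board):
    """The board plus its 3 rotations: 4 presentations of the
    same life-and-death problem, as described above."""
    out = [board]
    for _ in range(3):
        out.append(rotate(out[-1]))
    return out
```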

One could learn an awful lot of things this way. Programming languages could be learned this way - someone learning Haskell could take all the functions listed in the Prelude or his Haskell textbook, and ask QuickCheck to generate random arguments for the functions and ask the GHC interpreter ghci what the function and its arguments evaluate to. Games other than go, like chess, may work (a live example being ⁠Chess Tempo & Listudy⁠, and see the experience of ⁠Dan Schmidt⁠; or Super Smash Brothers). A fair bit of mathematics. If the dynamic card has Internet access, it can pull down fresh questions from an RSS feed or just a website; this functionality could be quite useful in a foreign language learning context with every day bringing a fresh sentence to translate or another exercise.

With some NLP software, one could write dynamic flashcards which test all sorts of things: if one confuses verbs, the program could take a template like “$PRONOUN $VERB $PARTICLE $OBJECT % {right: caresse, wrong: caresses}” which yields flashcards like “Je caresses le chat” or “Tu caresse le chat” and one would have to decide whether it was the correct conjugation. (The dynamicism here would help prevent memorizing specific sentences rather than the underlying conjugation.) In full generality, this would probably be difficult, but simpler approaches like templates may work well enough. Jack Kinsella:

I wish there were dynamic SRS decks for language learning (or other disciplines). Such decks would count the number of times you have reviewed an instance of an underlying grammatical rule or an instance of a particular piece of vocabulary, for example its singular/​plural/​third person conjugation/​dative form. These sophisticated decks would present users with fresh example sentences on every review, thereby preventing users from remembering specific answers and compelling them to learn the process of applying the grammatical rule afresh. Moreover, these decks would keep users entertained through novelty and would present users with tacit learning opportunities through rotating vocabulary used in non-essential parts of the example sentence. Such a system, with multiple-level review rotation, would not only prevent against overfit learning, but also increase the total amount of knowledge learned per minute, an efficiency I’d gladly invest in.

Even though these things seem like ‘skills’ and not ‘data’!
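The template approach sketched above is simple to prototype; a toy version in Python (the French sentences follow the conjugation example in the text, everything else is illustrative):

```python
import random

# One slot is the grammar point under test; the object slot rotates
# to keep sentences fresh and prevent memorizing specific answers.
SUBJECTS = {"Je": "caresse", "Tu": "caresses"}  # correct conjugations
OBJECTS = ["le chat", "le chien", "le lapin"]

def conjugation_card(rng=random):
    """Generate a fresh right-or-wrong conjugation question each review."""
    subject = rng.choice(list(SUBJECTS))
    verb = rng.choice(list(SUBJECTS.values()))  # maybe right, maybe wrong
    sentence = f"{subject} {verb} {rng.choice(OBJECTS)}"
    answer = "correct" if verb == SUBJECTS[subject] else "incorrect"
    return f"Is this conjugated correctly? '{sentence}'", answer
```

A fuller system would, as Kinsella suggests, also track review counts per underlying rule rather than per generated sentence.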

