So I’m sure I’m not the only one who gets intensely frustrated by the considerable disparity between the scores we’re shown while doing reviews (the percentage score in the upper-right next to the number of items completed/to go) and the scores we’re shown when reviews are complete.
It’s far from unusual for me to see percentages in the high 80s while nearing completion of a review session, only for my actual score to be somewhere in the low 70s (or lower). Personally, I’d find it much more helpful (and potentially motivating) to see an accurate reflection of how that particular review is going, rather than being disappointed when I learn how well (or not) I actually did.
Granted, this isn’t the biggest deal, and I’m sure some people couldn’t care less, but why is this even a thing? Even KaniWani displays accurate scores, and that tool is a constant exercise in mockery and frustration. Please sort it out.
During reviews, you are shown the percentage of answers (individual questions) you have gotten correct.
After reviews, you are shown the percentage of items you got completely correct.
So if you review two items (each with a meaning question and a reading question) and get the reading of one of them wrong, you will see 75% during reviews, because you have answered 3 out of 4 questions correctly. Afterwards, you will see a score of 50%, because you got 1 of 2 items wrong.
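To make the two ratios concrete, here’s a minimal sketch in TypeScript. It is not WaniKani’s actual code; the types and function names are made up for illustration, and it assumes one meaning question and one reading question per item (ignoring radicals and retries):

```ts
// Hypothetical per-item result; real items may have only a meaning question.
interface ItemResult {
  meaningCorrect: boolean;
  readingCorrect: boolean;
}

// During reviews: correct *answers* over total answers (questions).
function duringReviewAccuracy(results: ItemResult[]): number {
  const answers = results.flatMap(r => [r.meaningCorrect, r.readingCorrect]);
  const correct = answers.filter(Boolean).length;
  return (correct / answers.length) * 100;
}

// After reviews: *items* with no wrong answers over total items.
function postReviewAccuracy(results: ItemResult[]): number {
  const correct = results.filter(r => r.meaningCorrect && r.readingCorrect).length;
  return (correct / results.length) * 100;
}

// Two items, one reading missed:
const session: ItemResult[] = [
  { meaningCorrect: true, readingCorrect: true },
  { meaningCorrect: true, readingCorrect: false },
];
console.log(duringReviewAccuracy(session)); // 75
console.log(postReviewAccuracy(session));   // 50
```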
You might want different data displayed during reviews, and I agree they don’t make this difference at all clear. But you can’t bash it for being inaccurate.
To be honest, I’m not a supporter of displaying the accuracy during reviews (only after). It’s very easy to get caught up obsessing over how well or badly we’re doing instead of focusing on the learning itself. The post-review accuracy is enough for us to know how we’re doing and to adapt our studies (or to keep doing the same thing).
I usually recommend this script to remove the accuracy during reviews: