None of these charts look right; something seems significantly off in terms of measuring workload. That, or the data is poorly presented. As an analyst, completely unlabeled charts are a pretty big pet peeve of mine…
Also, I think it’s a poor assumption that your accuracy will keep improving until it reaches, say, a 0.1% failure rate on burn reviews. That’s a total fantasy IMO. Burn reviews are hard, and it’s a twofold effect.
One, it’s a test of whether you truly remember the information after a long span. Very easy for little bits of rendaku or odd on/kun combination readings to bite you. Two, while you may truly conceptually know the meaning of a kanji, remembering the specific answer WK expects is another story. You’ll find yourself putting in your own synonyms quite often.
I think I have at least average, if not better than average, accuracy and progression through WK, and I’d be thrilled if my burn reviews had as low as a 10% miss rate. I’m sure it’s quite a bit higher than that. The number of extra cards that get dropped back down to Guru when you get them wrong absolutely makes a step change in your workload.
I understand your concern about the lack of labels. I made my graphs for myself, but I should’ve been clearer in explaining what the axes represented. The X-axis in my graphs is weeks since starting and the Y-axis is the total number of reviews and lessons per week.
So for a better representation of workload, I would first try to estimate the relative work involved in different items. Lessons are more work than reviews, and kanji and vocab are more work than radicals, so weighting them accordingly would likely be a step better.
The fail rates were an experiment to see if the overall shape was affected. I can change the numbers, but it seems likely that different people learn in different ways. I could keep the fail rates at zero through Apprentice, perhaps, to get rid of the weirdness that occurs when the simulation runs out of new items (not that that can’t happen, it just seems like a distraction), and see how a 10% fail rate on the final stage affects the long tail.
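For anyone curious what that experiment could look like in code, here is a minimal sketch of per-stage fail rates in a toy SRS simulation. The intervals are roughly WaniKani’s published ones, but the stage-drop rule and the non-zero Guru/Master/Enlightened fail rates are illustrative assumptions, not anyone’s measured data:

```python
import random

# Hours between reviews for Apprentice 1-4, Guru 1-2, Master, Enlightened (roughly WK's intervals).
INTERVALS_H = [4, 8, 23, 47, 167, 335, 719, 2879]

# Assumed fail rates per stage: zero through Apprentice, 10% on the final (burn) review.
# The 0.05 values for the middle stages are placeholders to experiment with.
FAIL_RATES = [0.0, 0.0, 0.0, 0.0, 0.05, 0.05, 0.05, 0.10]

def simulate_item(start_h=0.0):
    """Return (burn_time_h, review_count) for one item under the fail rates above."""
    stage, t, reviews = 0, start_h, 0
    while stage < len(INTERVALS_H):
        t += INTERVALS_H[stage]
        reviews += 1
        if random.random() < FAIL_RATES[stage]:
            stage = max(stage - 2, 0)  # rough approximation of WK's penalty for a wrong review
        else:
            stage += 1
    return t, reviews
```

Running this over a whole level’s worth of items and bucketing the reviews by week would give the long tail the post is talking about; the 10% burn-stage fail rate mostly stretches that tail rather than changing the early shape.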
It’s a model though, nothing more. I appreciate knowing that it doesn’t reflect your experience.
This is 98.68% accuracy across all items; the only reason it’s this high is because this is my third time doing levels 1-20. My first time, my accuracy was much lower.
Quoting from my WK Guide, which is the main advertiser of this graph. I wrote about its imperfections there, so that everything was clear. I also wasn’t the one who created it, btw.
If anyone gets around to building a better one, @ me and I’ll consider adding it to the guide.
How are you measuring this? By sides of a card (as in, getting the meaning right and the reading wrong gives 50% accuracy) or by items (getting just 1 side wrong gives 0% accuracy)? If the former, I’d say 80-90% might be a good average. If the latter… 70-80%
I’m calculating the all-around accuracy by answers I provide, not by items. So for kanji, that’s 2 answers; if I get meaning wrong but reading correct, that’s counted as 50% accuracy.
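To make the two definitions above concrete, here is a small sketch contrasting per-answer and per-item accuracy. It assumes two-sided items (kanji/vocab) and the field names are made up for illustration; radicals, which only have a meaning answer, would need a small tweak:

```python
def per_answer_accuracy(reviews):
    """Accuracy counted per answer: meaning wrong + reading right scores 1/2."""
    correct = sum(r["meaning_correct"] + r["reading_correct"] for r in reviews)
    return correct / (2 * len(reviews))

def per_item_accuracy(reviews):
    """Accuracy counted per item: one wrong side makes the whole review count as wrong."""
    correct = sum(r["meaning_correct"] and r["reading_correct"] for r in reviews)
    return correct / len(reviews)

# Example: one kanji review with the meaning wrong but the reading right.
sample = [{"meaning_correct": False, "reading_correct": True}]
print(per_answer_accuracy(sample))  # 0.5
print(per_item_accuracy(sample))    # 0.0
```

The gap between the two numbers is exactly why the 80-90% vs 70-80% estimates above differ.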
You’ll notice that I slowed down around level 15. I got into shakier territory + the number of daily reviews was getting to me. You can also tell that even though I slowed down, my workload didn’t change drastically.
Number of reviews the next day + available lessons:
ETA: You should keep in mind that I don’t do all my lessons as soon as they’re available. My brain is not big enough. (Most people on the forum don’t do them ASAP – sometime around level 5 it becomes unfeasible for a lot of people, including me.)
The overall contour is pretty close to yours, actually. Mostly level on a week-by-week basis, but with a sharper dip here and there. This is encouraging, thank you.
It’s going to get worse though, because as soon as I hit level 21, I’ll be in new territory and making a lot of new errors. You also have to keep in mind that the more advanced SRS stages feel more difficult, and that the more levels and items a person has under their belt, the more leeches (items that keep bouncing between high and low SRS stages) they accrue.
I do think that the original graph isn’t all that wrong, however. Here’s the graph that compares my active items (apprentice to enlightened) to burnt items:
From here on, the active items part is going to get a little lower, and then probably stay the same until the end – it’ll depend on how many lessons I do and what my accuracy is like, though.
Okay, I’m done with these for now. I’m sharing all the source files here in case anyone wants to take it further, as well as screenshots of my final graphs and the following explanation.
I added weightings so that “activity” no longer just means reviews and lessons. Lessons are weighted twice as heavily as reviews, and vocabulary and kanji are weighted twice as heavily as radicals. So those big spikes at the end are because, for a few levels, the new material becomes very vocabulary-heavy, hitting both of those weightings. Hopefully this is closer to what “workload” should actually mean. It also smooths the start a little, which was previously very high because of disproportionately many radical reviews.
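For reference, the weighting itself is just a couple of multipliers. Here is a minimal sketch of that step, with made-up event tuples standing in for the simulation’s output:

```python
# Weighting scheme described above: lessons are 2x reviews, kanji/vocab are 2x radicals.
TYPE_WEIGHT = {"radical": 1.0, "kanji": 2.0, "vocabulary": 2.0}
KIND_WEIGHT = {"review": 1.0, "lesson": 2.0}

def weekly_activity(events):
    """Sum weighted activity per week; each event is (week, kind, item_type)."""
    totals = {}
    for week, kind, item_type in events:
        weight = KIND_WEIGHT[kind] * TYPE_WEIGHT[item_type]
        totals[week] = totals.get(week, 0.0) + weight
    return totals

# A radical review counts 1, a vocabulary lesson counts 4.
print(weekly_activity([(0, "review", "radical"), (0, "lesson", "vocabulary")]))  # {0: 5.0}
```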
Secondly, I modified it so that the user is definitely inactive (sleeping) for a consistently spaced 8 hours per day, just to be sure nothing weird was going on there. I don’t think it changed much; maybe it extended the whole thing by a week.
Thirdly, I’m using the progression ratios from @konekush, but I’m overriding them for Apprentice radicals and kanji to avoid weirdness in the overall advancement caused by insufficient learning items. I don’t think that’s necessarily realistic, just a compromise to make the graph more broadly representative. Otherwise the question will always be “why are there huge dips?”
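One simple way to model that sleep window, assuming a fixed 23:00–07:00 inactive block (the times here are my assumption, not the original model’s), is to push any review that comes due during the window to the wake-up time:

```python
SLEEP_START_H = 23   # assumed bedtime (hour of day)
SLEEP_HOURS = 8      # consistently spaced 8-hour inactive window

def next_active_time(due_h):
    """Defer a review that comes due while asleep until the end of the sleep window."""
    hour_of_day = due_h % 24
    wake = (SLEEP_START_H + SLEEP_HOURS) % 24   # 07:00 with the values above
    asleep = hour_of_day >= SLEEP_START_H or hour_of_day < wake
    if not asleep:
        return due_h
    wait = (wake - hour_of_day) % 24            # hours remaining until wake-up
    return due_h + wait
```

The main effect is that short Apprentice intervals due overnight bunch up in the morning, which stretches level-up times slightly without changing the weekly totals much.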
@jprspereira I’m not sure if these graphs would be better for your guide or not. They certainly seem to require a fair amount of explanation but then so does the one you are using. But I’ll leave that up to you.
That depends on whether the percentage a person gets wrong is mostly kanji or vocab, and on what stage of SRS the fails tend to happen at, since certain categories don’t necessarily affect level-up, but do affect workload. Or maybe this is no issue at all; I’m no statistician, nor do I know anything about making these sorts of graphs. It just sounds more complicated to me.
If you want complicated, why not some manner of applet that lets you pick accuracy levels in each category on sliders or whatever, then generates the corresponding graph on the fly?
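The plumbing for that is not actually too bad if done in a Jupyter notebook. Here is a rough sketch with ipywidgets; the `simulate_weekly_workload` function is just a placeholder so the sliders have something to drive, and you would swap in the real simulation discussed above:

```python
import matplotlib.pyplot as plt
from ipywidgets import FloatSlider, interact

def simulate_weekly_workload(kanji_acc, vocab_acc, weeks=60):
    # Placeholder only: more failures -> more repeat reviews per week. Not the real model.
    base = 600
    penalty = (1 - kanji_acc) * 800 + (1 - vocab_acc) * 600
    return [base + penalty for _ in range(weeks)]

def plot_workload(kanji_acc=0.9, vocab_acc=0.95):
    weekly = simulate_weekly_workload(kanji_acc, vocab_acc)
    plt.figure(figsize=(8, 3))
    plt.plot(range(len(weekly)), weekly)
    plt.xlabel("Weeks since starting")
    plt.ylabel("Reviews + lessons per week (weighted)")
    plt.show()

# Sliders for per-category accuracy; the graph regenerates as they move.
interact(plot_workload,
         kanji_acc=FloatSlider(value=0.9, min=0.5, max=1.0, step=0.01),
         vocab_acc=FloatSlider(value=0.95, min=0.5, max=1.0, step=0.01))
```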
I do believe what was intended was 90% (et cetera) accuracy across the board.
Feel free to add some actual user data to this thread with
Here’s mine
It resembles the prototype nicely (obviously I haven’t hit 60 yet, so it cuts off), except for one small detail: the slope of the middle section slants the other way as leeches build up and the daily reviews increase.