That’s a good summary! Most of the math can be ignored (I just put it there for the curious): you can just find the row with your accuracy in the top table, and that gives you the number of reviews per day, or the size of the queue, relative to your number of lessons per day. So the “formula” to use is just a multiplication (or a division to go the other way), as shown in the examples.

I don’t have time to dig into your post in detail right now, but it looks excellent for when I do have the time, thanks!

I have given it a bit of casual thought without the math. Let’s say, as an initial assumption, that you do lessons at a constant rate. Also assume initially that there’s no such thing as burning items. That means you’re putting items into the system at a constant rate and they aren’t going away; you’re just pushing them farther into the future.

In that case, graphing them out into the future, I would expect to see a small constant flow arriving 4 hours out, at the same rate as the lesson rate. Those get kicked a bit farther into the future.

Eventually those start arriving, and now I have that flow again, PLUS I still have the first-level flow. So now I have double the number of reviews per day. Those get kicked out even farther into the future.

Repeat for each SRS stage. Now at steady-state max, wouldn’t I have 9 times the reviews per day as my daily lesson rate? That’s a max, since burned items leave the system, as you noted, at the exact same rate you’re putting them in.

Failing an item should give you a 1-item bump in an earlier flow, and later a 1-item dip in the later flow it would otherwise have been in. On average, no big difference, but the system would be one item fatter from the time it *should* have been burned to the time it *does* get burned.

So I would expect a rule of thumb to be: “your max workload is going to be 9 × your lesson rate + fails.” No?

Looking at your table, at 100% accuracy you have 4 reviews per day. I mean, math doesn’t lie, but that doesn’t match my intuitive sense of what’s happening.

Edit: my bad; I’m reading the wrong column in your table. You have 8, and I bet the difference is I’m counting the burned level as 9 when I shouldn’t. So it does check with the gut.

Later edit, sorry this is getting long: So here’s a math question for you -

Would it be a net win or a net loss, in terms of time to burn, if you shortened the intervals until you were failing (on average) a certain percent below 100%? Like, if I could cut every interval by 10%, I would go from lesson to burn faster. But if that meant I also got 90% accuracy instead of 100%, would I win or lose?

It’s very good to get a “gut understanding” of a model, and realize it checks out with the solved math. Kudos on getting the right intuition! With perfect accuracy, the review rate is indeed exactly 8 times the lesson rate: one per level.
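That flow-by-flow argument can be checked with a tiny simulation (a sketch, not from the original posts; the delays are the standard non-accelerated WaniKani intervals, and the lesson rate is an arbitrary example value):

```python
from collections import defaultdict

# SRS stage delays in days (regular, non-accelerated levels)
delays = [4/24, 8/24, 1, 2, 7, 14, 30, 120]

lessons_per_day = 10   # arbitrary example rate
days = 400             # long enough to reach steady state

reviews = defaultdict(int)  # day -> number of reviews due that day
for start in range(days):
    t = start
    for d in delays:        # perfect accuracy: exactly one review per stage
        t += d
        reviews[int(t)] += lessons_per_day

# In steady state, reviews per day = 8 x lessons per day
print(reviews[300])  # → 80
```

Each lesson batch contributes one review per stage, so once all eight flows have “arrived”, the daily review count settles at 8× the lesson rate.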

About your question: the burn time for an item was noted `T` up there, and it’s equal to the average number of non-burned items `Qt` divided by the lesson rate λ. `Qt` is simply the sum of the individual average queue sizes `q1` to `q8`, and you might notice that each of these is equal to the review rate of that level multiplied by the SRS delay for that level. If you multiply every SRS delay by a constant, say `k` (for instance 90%), it factors out: `Qt` becomes `k*Qt`, and finally the burn time itself becomes `k*T`.

So that’s simple and proportional.
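To make that concrete, here is a minimal sketch of the computation at 100% accuracy (assumed delay values; `lam` is an arbitrary lesson rate):

```python
# SRS delays in days (assumed standard non-accelerated values)
delays = [4/24, 8/24, 1, 2, 7, 14, 30, 120]
lam = 10  # lessons per day, arbitrary

# At perfect accuracy every level's review rate equals the lesson rate,
# so each queue holds q_i = lam * delay_i items on average (Little's law).
q = [lam * d for d in delays]
Qt = sum(q)      # average number of non-burned items
T = Qt / lam     # burn time in days: just the sum of the delays
print(round(T, 1))  # → 174.5
```

Note that `T` comes out as the plain sum of the delays, which matches the ~175 days figure below.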

Dropping to 90% accuracy, however, brings you from 175 days to 204 days to burn. Multiplying the SRS delays by 0.9 also multiplies the burn time by 0.9: 0.9 × 204 = 183.6 days, still more than 175. In this case you already lose.

Assuming the drop in SRS times is equal to the drop in accuracy, we are basically expressing `k*T(k)`:

The burn time still grows drastically as the accuracy decreases, even with the matching reduction of SRS times. So it’s a net loss. This compensation is not enough, and it’s always better to have perfect accuracy.

To continue with the thought: what SRS time reduction would be necessary to keep the burn time untouched, even with a given reduced accuracy? We’re simply solving for a function `F` in `F(s) * T(s) = T(1)` (we want an SRS time reduction factor that keeps the burn time equal to the one for perfect accuracy). That gives `F(s) = T(1)/T(s)`:

For instance at 90% accuracy, the SRS times would need to be 85% of normal in order to compensate. At 75% accuracy, it’s already down to 63%, etc.
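With the rounded burn times quoted above, that factor comes out as follows (the small difference from 85% is just rounding of the burn times):

```python
T1, T90 = 175, 204  # burn times in days at 100% and 90% accuracy (values from above)
F = T1 / T90        # SRS time factor that keeps the burn time unchanged
print(round(F, 2))  # → 0.86
```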

So that could work. However, a bit of intuition tells me that in reality, your accuracy might *depend* on the SRS delays, and reducing them might break your ability to remember the items, therefore reducing your accuracy, and so on.

Oh, hey, you are using Maxima?

Noice

Sorry, I do not have anything to contribute to the conversation. I will now retreat to the shadows

Yup. It’s pretty nice! Unfortunately I don’t use it often enough, so every time I have to look up the documentation for even basic functions, but heh. Still saves me hours of tedious hand solving

I know the feeling. The worst is when you are trying to show something to someone next to you. Makes me feel like

Anyway, sorry again to derail the thread. Keep up the good work.

I have no idea what all that was about but I liked it anyway for the effort.

How’d you make that table in the comment?

Quite an effort has gone into this topic! Thank you for that!

As far as I can see, in order to achieve my “max workload, or efficiency”, I should do roughly 200 reviews per day - that’s actually double my current pace! How disheartening!

Maybe I’ll set that as my daily goal, at least for those days when I’m not simply rolling around the room trying to make sense of a mountain of research papers… ^^;

this post made me realise I’ve done basically zero maths since I got my degree 4 years ago

@Tenoch hi friend, my stats are as follows

I get an average of about 80 percent correct every session. What should my right pace be? I have about 30 minutes that I can afford for reviews every day (= 200 reviews at max).

Based on the table in the first post, that means roughly 13 lessons per day to get <200 reviews per day at 80% accuracy.
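The arithmetic behind that is just the review budget divided by the table’s reviews-per-lesson ratio; the ratio below is a hypothetical stand-in for the real value at 80% accuracy, which should be read off the table in the first post:

```python
review_budget = 200        # reviews per day you can afford
reviews_per_lesson = 15.4  # hypothetical ratio at 80% accuracy; use the table's value
print(round(review_budget / reviews_per_lesson))  # → 13
```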

(Nb: the table was generated in a way that is a bit wrong, and I don’t know if it has been corrected yet)

This is awesome @Tenoch! Huge thanks!

No, it hasn’t, but I’m now making a web tool to make proper computations based on your actual review accuracy. Ooooh boy. That escalated somewhat quickly

OK folks! You spake, and I listened! Introducing the **Wanikani accuracy and review pacing** webtool, at this address: Wanikani Statistics ! What a mouthful.

Basically, just feed in your v2 API key, and it will use your past reviews to compute your accuracy per level (including various levels drops depending on level and number of wrong answers). Then you just need to provide your lesson rate, and it will do All The Math ™.

Enjoy!

(If anyone has CSS skills, please feel free to make this pretty, I’ll happily apply your changes…)

Oh and of course I could only test it with my own progress, so there might be bugs here and there. Please feel free to ask questions or share if you see something odd!

Checking the average number of non-burned items against my actual values, and it looks pretty accurate

(Except for A1 and A2, because I obviously never have the average in there; pretty much all or nothing)

I added a `key` parameter to the URL, to prefill the key and preload the data. You can bookmark the whole thing now without having to copy-paste:

`https://castux.github.io/wanikani-stats/?key=your_key_here`

It’s way overestimating Enlightened for me. I put in 12 lessons per day (since I only increased it to 16 a few weeks ago), but it’s still overestimating Enlightened by 600, or about 45%. The others are relatively close, so the total is also overestimated by about that amount.

@Tenoch This calculation assumes 6-ish months of consistent lessons right? I took a break from lessons late April to early May, so that could explain the discrepancy.

Yes, the model assumes “steady state”, that is, items flowing through the whole system consistently. In practice that should mean you’ve had a relatively constant lesson rate and have already started burning items.

Of course, I may have some values wrong. These are the SRS delays I used (in days):

```
levelDelays =
[ 4.0 / 24.0
, 8.0 / 24.0
, 1.0
, 2.0
, 7.0
, 14.0
, 30.0
, 120.0
]
```

The intervals reported by the `/srs_stages` endpoint (in seconds):

```
Stage     Interval   Accelerated Interval
------    --------   --------------------
0 (L)            0          0
1 (A1)       14400       7200
2 (A2)       28800      14400
3 (A3)       82800      28800
4 (A4)      169200      82800
5 (G1)      601200     601200
6 (G2)     1206000    1206000
7 (M)      2588400    2588400
8 (E)     10364400   10364400
9 (B)            0          0
```

Everything >= 24h is shortened by 1hr.

- 24h is shortened to 23h
- …
- 120d is shortened to 119d + 23h
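That relation between the two listings can be double-checked directly: every nominal delay of 24h or more is exactly one hour short (a quick sketch using the regular-interval column from the table above):

```python
# /srs_stages intervals in seconds for stages A1..E (regular levels)
intervals = [14400, 28800, 82800, 169200, 601200, 1206000, 2588400, 10364400]
nominal_h = [4, 8, 24, 48, 7*24, 14*24, 30*24, 120*24]  # nominal delays in hours

# every nominal delay >= 24h is shortened by exactly one hour
checks = [s / 3600 == (h - 1 if h >= 24 else h)
          for s, h in zip(intervals, nominal_h)]
print(all(checks))  # → True
```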