Feature request: record / expose mistakes via API

Hi! I’ve been experimenting with using the WaniKani API to track my progress, and one thing I think would be very useful is if WaniKani recorded the actual mistakes I make. I’d like to download this data to analyze things like:

  • Find / study items that I accidentally confuse for each other (e.g. maybe I often type the answer for item A as the response to item B and vice versa)
  • Get a feel for how many of my mistakes are due to mixing up the kana/definition - e.g. sometimes I’ll accidentally type the hiragana for a word when it’s asking for a definition.
  • Maybe try to deduce how many mistakes are due to typos or near-misses in meaning (e.g. jumping vs. leaping, or jump vs. jumping)
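That last bucket is easy to sketch even without the data. Here's a hypothetical way to classify a wrong answer as a likely typo/near-miss versus a genuine mistake; the `classify` helper and the 0.7 threshold are my own invention for illustration, not anything WaniKani does:

```python
# Hypothetical sketch: deciding whether a wrong answer was "close enough"
# to count as a typo or near-miss. The threshold is an arbitrary guess.
from difflib import SequenceMatcher

def similarity(expected: str, given: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, expected.lower(), given.lower()).ratio()

def classify(expected: str, given: str, threshold: float = 0.7) -> str:
    if similarity(expected, given) >= threshold:
        return "likely typo / near-miss"
    return "genuine mistake"

print(classify("jumping", "jumpng"))   # dropped-letter typo
print(classify("jump", "jumping"))     # close in meaning/form
print(classify("jump", "river"))       # nothing in common
```

With the raw answers available over the API, running something like this over one's own review history would be a one-liner.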

This would require keeping track of the actual incorrect responses users enter.

Is this feasible?


No, unfortunately not. The actual answers you type are not transmitted over the API, only the number of incorrect reading/meaning answers. You can get some statistics via the review statistics endpoint and the actual review records, but the actual answers are not exposed in any way.
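To illustrate what the review statistics endpoint does expose today, here's a small sketch. The field names follow the shape of the public API v2 review statistics resource, but the subject IDs and counts below are invented sample data, not a real response:

```python
# What the API exposes today: per-item incorrect *counts*, never the
# answers that were actually typed. Sample payload mimics the
# review_statistics response shape; the numbers are made up.
import json

sample = json.loads("""
{
  "data": [
    {"data": {"subject_id": 25, "meaning_incorrect": 3, "reading_incorrect": 1}},
    {"data": {"subject_id": 26, "meaning_incorrect": 0, "reading_incorrect": 4}}
  ]
}
""")

total_meaning = sum(s["data"]["meaning_incorrect"] for s in sample["data"])
total_reading = sum(s["data"]["reading_incorrect"] for s in sample["data"])
print(total_meaning, total_reading)  # aggregate counts only
```

So you can tell *how often* you missed an item, but not *what* you typed.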

I understand this is not how things currently work with the review system :slight_smile:

As a professional software engineer though, this sounds completely possible. Sure, it requires protocol changes, database migrations, and possibly a version bump on the API - all of which are potentially annoying, expensive, and risky. But it’s possible.

I guess that’s what I’m asking about the feasibility of.

IMO, getting this extra data is potentially pretty valuable not just to me, but also to WaniKani as a way to improve the product. E.g. you could easily query to see what silly mistakes/typos people are making across the board, and then either allow those answers or build a better review system to help guide people in the right direction, as appropriate.

As a professional software engineer, you should also understand the extra space required to store the answers given for every review, which would bump up the costs of the service WaniKani provides. The additional server costs seem like much more of an issue than any engineering work involved.


But stress testing the system by intentionally getting a review wrong with a 1 million character answer sounds like fun. :slight_smile:


There is a script called Confusion Checker which might help you. If you make an incorrect answer it suggests what you might have been confused with.

Alternatively, you could be fastidious in noting your mistakes and use this for your own informational purposes but I don’t know how helpful that would be.

Hmm… Let me try some very conservative back-of-the-envelope math:

I’ll assume a few things, which I’m guessing are gross over-estimates:

  • Each answer averages ~100 bytes, which maybe expands to ~1,000 bytes of actual database storage
  • WaniKani has about ten thousand distinct quizzable items
  • A typical user will enter an average of 50 incorrect answers per item over the lifetime of the account

Looking at current Amazon RDS prices (which are known to be pretty pricey per GB compared to various self-hosting options), they charge $0.115/GB monthly.

That means the wrong answers for a paying user come to 10,000 items × 50 answers × 1,000 bytes = 0.5 GB, which costs… $0.0575 per month, less than a percent of the normal per-month fee. Even for lifetime members, it’d take 43 years of database storage costs to add up to 10% of the cost of a lifetime membership.
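For anyone who wants to poke at the assumptions, the estimate above is just:

```python
# Back-of-the-envelope estimate from the assumptions listed above.
bytes_per_answer = 1_000          # ~100 bytes of text, 10x storage overhead
items = 10_000                    # distinct quizzable items
wrong_answers_per_item = 50       # lifetime average per user
rds_price_per_gb_month = 0.115    # the Amazon RDS GB-month price cited above

gb_per_user = items * wrong_answers_per_item * bytes_per_answer / 1e9
cost_per_user_month = gb_per_user * rds_price_per_gb_month
print(f"{gb_per_user} GB -> ${cost_per_user_month:.4f}/month")
```

Swap in your own numbers; even at 10x my estimates it stays well under the monthly fee.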

So… I call BS on the data cost argument.


@zyoeru, thanks for the tip; I’ll check that out.

That’s still around 10-50x as much data per user as if they only kept track of each item’s progress, which is how I’d assume they currently have it set up. Increasing the data by a factor that large is going to impact query time and compute resources, not just storage.

Assuming cost is not an issue though, I’d guess that WK won’t implement it since it sounds like something that would only benefit the API right now. And presumably the API only exposes the data that WK keeps for its internal systems, not data that was created just for the API?

So if you wanted to have this data logged by WK, I’d recommend suggesting it in a way that it would be used by vanilla WK users. That is, in the main workflow of the WK app itself.

They started saving a bunch of data (like every review completed) when they introduced the API v2 alpha that (as far as I know) they don’t use for anything other than the API. So it’s not unprecedented.

Interesting. Maybe they’ll do it then.

Supposing I had access to an anonymized dump of this data, I’d immediately look for things like:

  • Items that people frequently confuse for each other. With this data in hand, WaniKani could add a system where, when someone answers a frequently-confused item with the frequently-confused answer, both items are automatically returned to the same level so the user can re-learn the distinction.
  • Wrong answers for definitions where many, many people enter the same wrong answer. Here, WaniKani could use this data to either add these answers to the accepted answer set (if appropriate), or perhaps to the set of answers where the answer box “shakes its head” at you (hopefully that gets the issue across). Or perhaps the frequently-wrong answer could be cleared up by adding or changing a context sentence. Etc.
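The first analysis is simple enough to sketch. Assuming an anonymized log of (item asked, answer given) records existed, mining confusion pairs could look something like this; the items, meanings, and mistake log are all invented for illustration:

```python
# Hypothetical sketch: counting pairs of items whose meanings users
# swap. All data below is made up; no such log exists today.
from collections import Counter

# Map each item to its accepted meaning(s).
accepted = {
    "嘘": {"lie"},
    "運": {"luck", "fortune"},
}

# Fake mistake log: (item the quiz asked about, answer the user typed).
mistakes = [
    ("嘘", "luck"),   # answered with 運's meaning
    ("嘘", "luck"),
    ("運", "lie"),    # and vice versa
    ("嘘", "lei"),    # plain typo, matches nothing
]

pairs = Counter()
for asked, given in mistakes:
    for other, meanings in accepted.items():
        if other != asked and given in meanings:
            pairs[frozenset((asked, other))] += 1

for pair, count in pairs.most_common():
    print(sorted(pair), count)
```

Sort by count and the worst confusion pairs fall straight out, ready to feed into whatever re-learning mechanism they'd want to build.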

I doubt the raw storage costs are much of a deterrent for doing this. Much more significant will be the time spent in updating the service to handle and store that data, without slowing the service down much, and to build the code to query and actually do something useful with the data.

I don’t deny that having access to this data could be fun and useful. But the devs only have so much time, and time spent on this is time taken away from other service improvements. To look at this as a business decision: doing this requires both capital and operational expenditure with only very marginal added value and very little ROI. I don’t see a positive business case.

If I were calling the shots at WK, I would rather spend that time on improving the early-user experience that (I suspect) scares away a lot of potential paying customers. And considering the multi-system SRS changes that are in the pipeline, and the recent chat experiments, I suspect that is exactly what they are doing now.


I’m picturing overflowing the buffer by so much that your next ten thousand answers also get marked wrong. Your grandchildren will still be getting incorrect answers. :slightly_smiling_face:
