API V2 Alpha Documentation


Each review object records the state of a single card after it has been successfully answered.

For example, you do a review for the kanji 金. You answered the meaning correctly, but got the reading wrong twice before answering it correctly. This result gets recorded as a new review object. Every subsequent review of this kanji will be recorded as a separate review object.

If you are looking for the last “failed” review for a particular subject, you’ll want to use the subject_ids (or assignment_ids) filter on the /reviews endpoint. Order the payload by data.created_at descending and select the first review where data.incorrect_meaning_answers != 0 or data.incorrect_reading_answers != 0.
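The sort-and-select step above can be sketched like this. The sample objects are made up for illustration and stand in for the `data` array of a real `/reviews?subject_ids=...` response:

```javascript
// Made-up review objects standing in for a /reviews response payload.
const reviews = [
  { data: { subject_id: 440, created_at: '2019-03-01T10:00:00Z',
            incorrect_meaning_answers: 0, incorrect_reading_answers: 2 } },
  { data: { subject_id: 440, created_at: '2019-03-05T10:00:00Z',
            incorrect_meaning_answers: 0, incorrect_reading_answers: 0 } },
  { data: { subject_id: 440, created_at: '2019-03-03T10:00:00Z',
            incorrect_meaning_answers: 1, incorrect_reading_answers: 0 } },
];

// Sort by created_at descending, then take the first review that had
// any incorrect answers — that's the most recent "failed" review.
const lastFailed = reviews
  .slice()
  .sort((a, b) => b.data.created_at.localeCompare(a.data.created_at))
  .find((r) => r.data.incorrect_meaning_answers !== 0 ||
               r.data.incorrect_reading_answers !== 0);

console.log(lastFailed.data.created_at); // 2019-03-03T10:00:00Z
```

ISO 8601 timestamps sort correctly as strings, so a plain string comparison is enough here.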


Thanks this got me on the right track!

I wanted a script to generate me an export of everything I failed in the past 7 days.

If anyone is interested you can find my script here: https://github.com/pamput/wanikani-tools/tree/master/recent_fail


I’m trying to do a really simple thing: export a list of kanji known to the user (that is, SRS level 1 or greater). I’ve played around with the API a bit, but can’t seem to find a way of achieving this. The best I could do was:


But that doesn’t give me the kanji directly; I’d have to fetch the subjects in a second call, using the IDs as arguments.

Another thing I tried is https://api.wanikani.com/v2/subjects?types=kanji&levels=1,2,3… giving the levels up to the user’s current level. That does include the kanji, but it also includes not-yet-unlocked kanji from the current level, and they’re buried pretty deep in a huge amount of JSON.

Is there a simpler way I’ve missed?


Since I’m not sure if you’re developing an app or just wanting to fetch your own list, here are three options:

Just using API fetches, there’s no way around having to fetch the whole /subjects endpoint and cross-referencing the subject_id of each result from your first query above.
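That two-call cross-referencing can be sketched like this. The payloads are made-up samples standing in for real /assignments and /subjects responses:

```javascript
// Made-up sample payloads; real responses are paginated and larger.
const assignments = [   // from /assignments (data array)
  { data: { subject_id: 440, subject_type: 'kanji', srs_stage: 4 } },
  { data: { subject_id: 441, subject_type: 'kanji', srs_stage: 1 } },
];
const subjects = [      // from /subjects?types=kanji (data array)
  { id: 440, data: { characters: '金' } },
  { id: 441, data: { characters: '水' } },
  { id: 442, data: { characters: '火' } },
];

// Index subjects by id, then map each started assignment
// (srs_stage > 0) to its subject's characters.
const byId = new Map(subjects.map((s) => [s.id, s]));
const knownKanji = assignments
  .filter((a) => a.data.srs_stage > 0)
  .map((a) => byId.get(a.data.subject_id).data.characters);

console.log(knownKanji); // ['金', '水']
```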

But, if you’re using a browser, the Wanikani Open Framework makes this a lot easier, since it does the fetches and cross-linking for you. If you have the framework installed, go to the WK dashboard and paste the block below into your JavaScript console:


function fetch_items() {
  return wkof.ItemData.get_items('assignments');
}

function process_items(items) {
  var kanji = items.filter((item) => item.object === 'kanji');
  var learned = kanji.filter((item) => {
    return (
      item.assignments &&
      item.assignments.unlocked_at &&
      item.assignments.srs_stage > 0
    );
  });
  var characters = learned.map((item) => item.data.slug);
  return characters;
}

fetch_items().then(process_items).then(console.log);

(edit: now that I’m at my computer, I was able to check the code above and fix the missing parentheses and braces)

Also, non-coders can go to https://www.wkstats.com/#items.wk.kan and click on the “Not Learned” button to hide non-learned items.


Just curious, but is there a published test user or test server instance (or plans for one)? I’m starting to write some automated tests for KameSame, and short of signing up a trial user for WK, I don’t have a lot of great options for fetching a user in a known state.

(I could of course build fixtures for all the API responses I need, but it’d be nice to have at least one test suite that goes end-to-end to WK’s servers)


There’s nothing publicly available. No idea what they do internally.

I use a trial account for some of my testing, but most of what I do doesn’t need to be static.


I am currently just mocking out the responses I need. It’s not too cumbersome.


@viet @oldbonsai When you have time during business hours, can you tell me if there is an easy way to get an aggregate value, across all reviews, of the percentages that show up on the review summary page after a review session is completed? Right now, the only easy stat I know we can get is the percentage of answers from a single review session, which is not nearly as useful.


If you know when the review session started, you can use that timestamp with the updated_after filter on /review_statistics. This will return all review statistics belonging to subjects reviewed after the updated_after timestamp. This works on the assumption that review_statistic objects only get updated after a subject’s review is submitted, which is true right now.

The proper way would be to use the updated_after filter on /reviews, collect the subject_id, and then hit up /review_statistics with the subject_ids filter.

The former is one query versus the latter’s two queries.
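Either way, once you have the review_statistic objects, the aggregation itself is simple. A sketch, using made-up sample objects in place of a real /review_statistics response (the field names match the v2 API):

```javascript
// Made-up review_statistic objects standing in for a real response.
const stats = [
  { data: { meaning_correct: 8,  meaning_incorrect: 2,
            reading_correct: 7,  reading_incorrect: 3 } },
  { data: { meaning_correct: 10, meaning_incorrect: 0,
            reading_correct: 9,  reading_incorrect: 1 } },
];

// Sum correct and total answers across all subjects, then compute
// one aggregate percentage instead of per-session percentages.
let correct = 0, total = 0;
for (const s of stats) {
  correct += s.data.meaning_correct + s.data.reading_correct;
  total   += s.data.meaning_correct + s.data.meaning_incorrect +
             s.data.reading_correct + s.data.reading_incorrect;
}
const percentage = (100 * correct / total).toFixed(1);

console.log(percentage + '%'); // 85.0%
```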

Let me know if I’ve misunderstood your question.


I think I got it, thanks!

CC @rfindley (though I assume you knew most of that already)