API V2 Beta Documentation

Exactly.

With both versions of the API, we’ve made a commitment to make it work the way the documentation says it does, and to let everybody know if we change it in a significant way. Any other endpoints on the site are for our internal use, and we can change those any way we’d like, at any time, without telling anyone.

1 Like

It’s legal since you access that endpoint through your browser all the time. :wink: I’d only ask that you play nice with volume and respect the API rate limits. It’s kind of hard to break the limits we’ve got on there through your browser, but pretty easy to do with parallel requests programmatically.
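For anyone scripting against those endpoints, here is a minimal sketch of what "playing nice" can look like: requests go out one at a time with a pause in between, rather than in parallel. The delay value and the Bearer-token header are assumptions for illustration, not official limits or guidance.

```typescript
// A sketch of sequential, spaced-out requests instead of parallel ones.
// The 1100 ms delay and the Bearer-token header are assumptions, not
// official rate-limit values.
const API_TOKEN = "your-v2-token-here";

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function politeGet(urls: string[]): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const url of urls) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${API_TOKEN}` },
    });
    results.push(await res.json());
    await sleep(1100); // space requests out rather than firing them all at once
  }
  return results;
}
```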

1 Like

I’m trying to get all of the failed reviews in the last X hours. Is that possible with the data currently given by APIv2?

From an item’s review_statistics, I’m thinking that I should check data_updated_at to handle the "last X hours" part of the filter. I don’t see a way to get all wrong reviews in that period, but I think I can find whether the most recent review for an item was wrong. I could do that by checking meaning_incorrect >= 1 && meaning_current_streak == 1 (and the same for reading). I think the first part is necessary to avoid including items whose first review was correct, since the initial lesson doesn’t add to the streak.

Does it sound like that would work? Can anyone think of a way to make that more concise?
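As a rough illustration of the filter described above, here is a TypeScript sketch. It assumes review_statistics accepts the same updated_after filter as other collections, that the reading-side fields mirror the meaning ones, and a Bearer-token header; pagination is omitted for brevity.

```typescript
// A sketch of the proposed filter: pull review_statistics updated in the last
// N hours and keep the ones whose most recent answer looks like a miss.
// Assumptions: updated_after works on this endpoint, the reading_* fields
// mirror the meaning_* ones, Bearer-token auth. Pagination (pages.next_url)
// is omitted.
const API_TOKEN = "your-v2-token-here";

async function recentlyFailedStatistics(hours: number) {
  const since = new Date(Date.now() - hours * 3600 * 1000).toISOString();
  const url = `https://api.wanikani.com/v2/review_statistics?updated_after=${since}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  const body = await res.json();

  // Heuristic from the post: incorrect >= 1 rules out items whose only history
  // is the lesson, and current_streak == 1 means the streak was just reset.
  return body.data.filter((stat: any) => {
    const d = stat.data;
    const meaningMiss = d.meaning_incorrect >= 1 && d.meaning_current_streak === 1;
    const readingMiss = d.reading_incorrect >= 1 && d.reading_current_streak === 1;
    return meaningMiss || readingMiss;
  });
}
```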

You can also use reviews: grab everything with the updated_after filter, then look for ones with incorrect counts greater than 0. Then you don’t have to worry about streaks and all that from the statistics, plus you’re getting all the reviews for a subject that happened during that window, even if there’s more than one.
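A quick sketch of that approach, assuming the incorrect counts live in fields named incorrect_meaning_answers and incorrect_reading_answers (not spelled out in this thread) and Bearer-token auth; pagination is again left out.

```typescript
// The simpler approach: grab reviews updated in the window and keep those with
// any incorrect answers. Field names and the auth header are assumptions.
const API_TOKEN = "your-v2-token-here";

async function failedReviewsSince(hours: number) {
  const since = new Date(Date.now() - hours * 3600 * 1000).toISOString();
  const res = await fetch(
    `https://api.wanikani.com/v2/reviews?updated_after=${since}`,
    { headers: { Authorization: `Bearer ${API_TOKEN}` } }
  );
  const body = await res.json();
  return body.data.filter(
    (review: any) =>
      review.data.incorrect_meaning_answers > 0 ||
      review.data.incorrect_reading_answers > 0
  );
}
```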

One more question. Could you add last_review_at to the /assignments endpoint? With the current data available (specifically available_at), I can easily calculate how long from now until the next review. However, what I really want to do is calculate how far (percentage-wise) that item is through the waiting period for that SRS level. For example, the interval for Apprentice 1 is normally 4 hours. If it’s 1 hour until the next review, I want to be able to calculate that this item is 75% of the way through the waiting period.

With only available_at, the best I can do is keep a copy of what I think the correct duration of every SRS level is, and cross reference that. This would also need special logic for the first two levels since they have short durations, and could potentially become inaccurate if you change any of the SRS intervals.

If, however, I had last_review_at in addition to available_at this would be much simpler, and resistant to future changes. I’d simply have to do (now - last_review_at) / (available_at - last_review_at).
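As a sketch of what that calculation would look like if the requested (hypothetical, not-yet-existing) last_review_at field were available alongside available_at:

```typescript
// Fraction of the SRS waiting period already elapsed, given available_at and
// the hypothetical last_review_at field requested above.
function progressThroughInterval(
  lastReviewAt: Date,
  availableAt: Date,
  now: Date = new Date()
): number {
  const total = availableAt.getTime() - lastReviewAt.getTime();
  const elapsed = now.getTime() - lastReviewAt.getTime();
  return Math.min(1, Math.max(0, elapsed / total)); // clamp to [0, 1]
}

// Example from the post: a 4-hour Apprentice 1 interval with 1 hour remaining
// means the item is 3 hours in, i.e. 0.75 of the way through the wait.
```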

I’ll chat with @viet about it, but a couple things come to mind that you could do in the meantime:

  • Take a look at the in-progress API docs repository, specifically this page about SRS intervals. Instead of speculating about those intervals, you can use the official intervals.
  • You can get the last review date from the reviews endpoint, too. The created_at for the most recent review for an assignment will give you the moment they submitted the review (see the sketch after this list). It’s going to be some extra calls if you’re doing something in-browser, but local storage for their assignments might speed that up a bit.
  • You can mainly lean on the data_updated_at field for each assignment. Unless we’re doing something on our end to update assignments, changes to that should be limited to creating the assignment, unlocking it when the component assignments are passed, starting it from lessons, or when a review is done.
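For the second suggestion, a rough sketch of pulling a subject’s most recent review time; the subject_ids filter on the reviews collection and the Bearer-token header are assumptions here.

```typescript
// Use the most recent review's created_at as the "last reviewed" time, then
// combine it with the assignment's available_at. The subject_ids filter and
// the auth header are assumptions.
const API_TOKEN = "your-v2-token-here";

async function lastReviewTime(subjectId: number): Promise<Date | null> {
  const res = await fetch(
    `https://api.wanikani.com/v2/reviews?subject_ids=${subjectId}`,
    { headers: { Authorization: `Bearer ${API_TOKEN}` } }
  );
  const body = await res.json();
  if (body.data.length === 0) return null;

  // Take the newest review for the subject (ISO timestamps compare as strings).
  const newest = body.data.reduce((a: any, b: any) =>
    a.data.created_at > b.data.created_at ? a : b
  );
  return new Date(newest.data.created_at);
}
```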
1 Like

If the intervals aren’t likely to change again, the docs should suffice. If there’s a chance they’ll change, it would be nice to have an /srs_info endpoint (or whatever) that would hold that info, so apps wouldn’t break if the intervals change down the road.

2 Likes

On a side note… do you guys use a markdown editor while writing your docs? Or some method of previewing your *.md files before checking them into GitHub?

I’d like the script to be as useful as possible, so I’d like to avoid any data points that can change for unrelated reasons.


I think I’d need to get reviews for the last four months for that to work, right? That seems necessary to catch items that are coming up on their Burn review soon. If that’s the case, I’d like to avoid this approach since it could be slow.


I think this is probably the best approach while I wait for a response on the last_review_at idea. It’s unfortunate to have to cross-reference both the SRS stage and the item’s level (to handle levels 1 and 2), but it shouldn’t be a big deal.


I still think last_review_at would be the most concise way of doing what I want. But today I realized that to be 100% accurate I’d still need to encode some logic for how the SRS works, since the next review time rounds down to the nearest hour. In this regard only, the approach of just using available_at and cross-referencing the SRS intervals is better, since I wouldn’t have to do any rounding. (It’s still worse because of the cross-referencing itself and the hardcoded SRS intervals.)
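For reference, here is a sketch of that available_at-plus-interval-table approach. Only the Apprentice 1 value (4 hours) appears in this thread, so the rest of the table is a placeholder to fill in from the docs (or the SRS endpoint discussed below), and items at levels 1 and 2 would still need their own accelerated table.

```typescript
// Progress through the current SRS wait, using only available_at and a local
// copy of the intervals. Only the Apprentice 1 value comes from this thread;
// the rest must be filled in from the official intervals. Levels 1–2 use
// shorter, accelerated intervals and would need a separate table.
const SRS_INTERVAL_HOURS: Record<number, number> = {
  1: 4, // Apprentice 1, from the example earlier in the thread
  // 2: ..., 3: ..., etc. — fill in from the official intervals
};

function progressFromAvailableAt(
  availableAt: Date,
  srsStage: number,
  now: Date = new Date()
): number | null {
  const hours = SRS_INTERVAL_HOURS[srsStage];
  if (hours === undefined) return null; // stage not in the table yet
  const intervalMs = hours * 3600 * 1000;
  // No rounding logic needed here: the window is treated as ending exactly at
  // available_at and lasting one full interval.
  return 1 - (availableAt.getTime() - now.getTime()) / intervalMs;
}
```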

The docs are set up in Middleman, so we can run a local server and see how they look with styles and everything. I use TextMate as my IDE, and I’ve done a bit of tweaking for markdown files to make them render with a little bit of structure.

We’ve chatted about it, and we’re going to go with @rfindley’s suggestion to add an endpoint for the SRS information. It gives everyone a reference for other things, and a way to stay up to date if the timing or names or anything else change in the future. That said, I’d cache that data real good. We don’t see ourselves changing the timing on those any time soon.

We’ll probably knock that out this week. We’ll update the docs and give everyone a heads up here.

4 Likes

Okay, sounds good. I guess there’s one other thing to consider. Theoretically, if you change the intervals and update this endpoint right away, the endpoint can only be used for projecting new reviews going forward. Old reviews would (presumably) still be based on the old intervals, and I don’t see a way to figure out what the interval was when a review was done.

I’m not too worried about it since you said it won’t change any time soon, but it might be something at least worth considering if it impacts how you structure the API endpoint.

@viet @oldbonsai,
I’m currently rewriting the Timeline script for APIv2, and I had a question about review scheduling.

In the old days, reviews were scheduled on the quarter-hour. If I remember correctly, they’re now scheduled by counting from the start of the current hour, and reviews generally occur at the top of the hour, correct?
[edit: I found where you described this [here]]

But I know at some point people were saying that the first review after lessons can still end up on the quarter-hour. Is that still true, or are all reviews scheduled to come due on the hour?

I’m excited to start using it once the new docs are published on the official website. I’m sure they’ll be just as user-friendly as the current ones :smiley:

All reviews show up on the hour.

I double checked the code, and all reviews should show up on the hour. If someone has been on vacation for a really long time and had old reviews on the old system, they might still have an available_at that was set under the old scheme, but anything new for lessons and reviews follows the current scheme.
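For display purposes in a script, the scheduling rule described here can be approximated as "completion time plus the interval, truncated to the top of the hour." The exact server-side computation isn’t spelled out in this thread, so treat the sketch below as an approximation only.

```typescript
// Approximate the next review time: completion time plus the SRS interval,
// truncated to the top of the (UTC) hour. This mirrors the behaviour described
// above but is not the official server-side calculation.
function truncateToUtcHour(date: Date): Date {
  const d = new Date(date);
  d.setUTCMinutes(0, 0, 0);
  return d;
}

function projectedNextReview(completedAt: Date, intervalHours: number): Date {
  const next = new Date(completedAt.getTime() + intervalHours * 3600 * 1000);
  return truncateToUtcHour(next);
}
```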

1 Like

@rfindley and @seanblue: We put up the SRS stage endpoint at https://api.wanikani.com/v2/srs_stages. We’ll update the docs later today or tomorrow, but the response should be pretty clear. Let us know if you’ve got any questions.
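Since the advice above was to cache this data aggressively, here is a sketch of doing that in the browser. The response shape beyond being JSON isn’t assumed, localStorage stands in for whatever cache a script actually uses, and the Bearer-token header is an assumption.

```typescript
// Fetch the SRS stage data and cache it hard, re-fetching at most weekly.
// The cache key, TTL, and auth header are assumptions for illustration.
const API_TOKEN = "your-v2-token-here";
const CACHE_KEY = "wk_srs_stages_cache";
const CACHE_TTL_MS = 7 * 24 * 3600 * 1000;

async function getSrsStages(): Promise<unknown> {
  const cached = localStorage.getItem(CACHE_KEY);
  if (cached) {
    const { fetchedAt, data } = JSON.parse(cached);
    if (Date.now() - fetchedAt < CACHE_TTL_MS) return data;
  }
  const res = await fetch("https://api.wanikani.com/v2/srs_stages", {
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  const data = await res.json();
  localStorage.setItem(
    CACHE_KEY,
    JSON.stringify({ fetchedAt: Date.now(), data })
  );
  return data;
}
```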

4 Likes

I’ve been playing around a little with the API today in order to make myself a leech sheet, and I’ve noticed that some of the subjects (for radicals, at least) don’t have any characters defined. If I query the “triceratops” radical, I get a null value for characters. Is this expected?

Some of the radicals are made up and, as such, don’t have Unicode representations. I believe WK uses images for those.
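For scripts, a small sketch of handling that case: render the literal characters when present and fall back to an image otherwise. The character_images field name and its shape are assumptions, not something confirmed in this thread.

```typescript
// Render a radical's characters when present; otherwise fall back to an image.
// The character_images field and its url property are assumed shapes.
type RadicalData = {
  characters: string | null;
  character_images?: { url: string }[]; // assumed shape
};

function renderRadical(radical: RadicalData): string {
  if (radical.characters !== null) {
    return `<span class="radical">${radical.characters}</span>`;
  }
  const image = radical.character_images?.[0];
  return image
    ? `<img class="radical" src="${image.url}" alt="radical">`
    : "?"; // no characters and no image: nothing to show
}
```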

1 Like

That makes sense. I’m currently trying to render the SVGs as nice icons and having fun with it. :slight_smile:

1 Like