On a side note… do you guys use a markdown editor while writing your docs? Or some method of previewing your *.md files before checking them into GitHub?
I’d like the script to be as useful as possible, so I’d like to avoid any data points that can change for unrelated reasons.
I think I’d need to fetch reviews from the last four months for that to work, right? That would be necessary so I can catch items coming up for Burn review shortly. If that’s the case, I’d rather avoid this approach since it could be slow.
I think this is probably the best approach while I wait for a response on the last_review_at idea. It’s unfortunate to have to cross reference the SRS Level and Level (to handle levels 1 and 2), but it shouldn’t be a big deal.
I still think last_review_at would be the most concise way of doing what I want. But today I realized that to be 100% accurate I’d still need to encode some logic about how the SRS works, since the next review rounds down to the nearest hour. In that regard only, the approach of just using available_at and cross-referencing the SRS intervals is better, since I wouldn’t have to do any rounding. (It’s still worse overall because of the cross-referencing itself and the hardcoded SRS intervals.)
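To illustrate the rounding I mean, here’s a minimal sketch. The interval values are made-up placeholders (not the real SRS timings), and `nextReviewAt` is my own name for the calculation: next review = review time + interval, truncated down to the top of the hour.

```javascript
const HOUR = 60 * 60 * 1000;

// Illustrative intervals in hours, keyed by SRS stage.
// These are assumptions, NOT the actual WaniKani timings.
const intervalsHours = { 1: 4, 2: 8, 3: 24, 4: 48 };

function nextReviewAt(lastReviewAt, srsStage) {
  const raw = lastReviewAt.getTime() + intervalsHours[srsStage] * HOUR;
  // Round down to the nearest hour, matching the described scheduling.
  return new Date(Math.floor(raw / HOUR) * HOUR);
}
```

So a review done at 10:30 on a stage with a 4-hour interval would come due at 14:00, not 14:30 — that floor is the extra SRS knowledge I was hoping to avoid.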
The docs are set up in Middleman, so we can run a local server and see how they look with styles and everything. I use TextMate as my IDE, and I’ve done a bit of tweaking for markdown files so they render with a little bit of structure.
We’ve chatted about it, and we’re going to go with @rfindley’s suggestion to add an endpoint for the SRS information. It provides reference info for other things and some updatability for everyone if the timing or names or anything change in the future. That said, I’d cache that data real good. We don’t see changing the timing on those any time soon.
We’ll probably knock that out this week. We’ll update the docs and give everyone a heads up here.
Okay, sounds good. I guess there’s one other thing to consider. Theoretically if you change the intervals and update this endpoint right away, the endpoint can only be used for projecting out for new reviews. Old reviews would (presumably) still be based on the old intervals, and I don’t see a way to figure out what the interval was when the review was done.
I’m not too worried about it since you said it won’t change any time soon, but it might be something at least worth considering if it impacts how you structure the API endpoint.
@viet @oldbonsai,
I’m currently rewriting the Timeline script for Apiv2, and had a question about review scheduling.
In the old days, reviews were scheduled on the quarter-hour. If I remember correctly, they’re now scheduled by counting from the start of the current hour, and reviews generally occur at the top of the hour, correct?
[edit: I found where you described this [here]]
But I know at some point people were saying that the first review after lessons can still end up on the quarter hour. Is that still true, or are all reviews scheduled to come due on the hour?
I’m excited to start using it once the new docs are published on the official website. I’m sure they’ll be just as user-friendly as the current ones.
All reviews show up on the hour.
I double checked the code, and all reviews should show up on the hour. If someone has been on vacation for a really long time and had old reviews on the old system, they might still have an available_at that falls onto the old scheme, but anything new for lessons and reviews follows the current scheme.
@rfindley and @seanblue: We put up the SRS stage endpoint at https://api.wanikani.com/v2/srs_stages. We’ll update the docs later today or tomorrow, but the response should be pretty clear. Let us know if you’ve got any questions.
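Since the advice above is to cache that data aggressively, here’s one way a script might do it — a tiny TTL cache sketch. The class name and TTL are my own choices; the idea is just to wrap whatever fetches `GET https://api.wanikani.com/v2/srs_stages` (with the usual `Authorization: Bearer <token>` header) so it only refetches once the cached copy expires.

```javascript
// Minimal time-to-live cache: calls `fetcher` at most once per `ttlMs`.
class TtlCache {
  constructor(fetcher, ttlMs) {
    this.fetcher = fetcher;
    this.ttlMs = ttlMs;
    this.value = undefined;
    this.fetchedAt = -Infinity; // force a fetch on first use
  }

  get(now = Date.now()) {
    // Refetch only when the cached value has expired.
    if (now - this.fetchedAt >= this.ttlMs) {
      this.value = this.fetcher();
      this.fetchedAt = now;
    }
    return this.value;
  }
}
```

Given that the stage timings aren’t expected to change any time soon, even a TTL of a day (or caching for the whole session) seems reasonable.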
I’ve been playing around a little with the API today in order to make myself a leech sheet, and I’ve noticed that some of the subjects (for radicals, at least) don’t have any characters defined. If I query the “triceratops” radical, I get a null value for characters. Is this expected?
Some of the radicals are made up and as such do not have unicode representations. I believe WK uses images for those.
That makes sense. I’m currently trying to render the SVGs as nice icons and having fun with it. 
Kumirei is correct.
Radicals are primarily represented by an image, because not all of our radicals have a unicode representation. If there is a unicode version, then we include it as a supplement.
In other words all radicals have images (and SVG). Some radicals have unicode.
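In script terms, that means rendering a radical needs a fallback. Here’s a rough sketch of what I mean — `radicalDisplay` is a hypothetical helper of my own; the `characters` and `character_images` field names follow the subject response discussed above, but treat them as assumptions to verify against the docs.

```javascript
// Pick a displayable representation for a radical subject.
// Prefers the unicode characters; falls back to the first image.
function radicalDisplay(subject) {
  if (subject.characters) {
    return { type: 'text', value: subject.characters };
  }
  // characters is null for made-up radicals: use an image instead.
  const images = subject.character_images || [];
  return images.length
    ? { type: 'image', value: images[0].url }
    : { type: 'none', value: null };
}
```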
Now if I could just understand why resizing the SVGs causes blank squares to render over parts of some of them, I’ll be happy.
I’m resizing the SVGs using CSS, so it’s probably something I’ve done there.
It looks like maybe your setup is trying to render the clipping masks with a white fill. They should only be clipping other rendered vectors, and shouldn’t actually be rendered themselves. Are you using the SVGs with CSS, or the ones without? What browser (or other tools) are you rendering with?
I’d recommend opening one of the clipped SVGs in the Elements panel, finding the clipping mask(s), which should be inside a <defs> tag, and seeing what CSS rules are being applied to them.
Edit: What CSS are you using to resize them? You should be able to just set the width and height on the <svg> element.
@rosshendry: what @rfindley said. There should be two SVGs delivered with each radical: with and without inline styles, as indicated in the metadata object in the API response. Try the ones with the inline styles and see if that makes a difference.
I should be using the ones with stylesheets, but I’ve seen the effects of “should” on my code before! I’ll hopefully have some time to pick this up again next week, I could really do with a leech window on my desktop at all times.
For what it’s worth, I use the ones without inline styles, then just add the following CSS on the page:
```css
svg.radical {
  fill: none;
  stroke: #000;
  stroke-width: 68;
  stroke-linecap: square;
  stroke-miterlimit: 2;
}
```
I’ve tested that in Chrome and Firefox, and both work.
If you’re rendering at small size, you may want to reduce the stroke-width.
And to set the size:
```css
svg.radical {
  height: 40px;
  width: 40px;
}
```
Not sure if this is intentional, but the API is available over plain HTTP as well as HTTPS. That could be a security issue, since the authorization token would be sent in cleartext.
EDIT: I’ve also just noticed that the docs list subject_component_ids as a property on both the kanji and vocab data, but the actual API returns component_subject_ids.