We’ll take a look at it in the morning. Sorry about that.
Edit: Our morning is within eight hours of this timestamp.
@hitechbunny,
I noticed in your output from the first query (chronologically) that there’s a ‘?’ missing before ‘page_after_id’ in the next_url field. Is that a typo, or is it really missing from the next_url?
Glad it is working. It was a hard choice to drop easy parallelization, but when this is the result…
Great job @oldbonsai
Yeah, we know it’s quite the tradeoff, especially when folks are trying to get a bunch of their records at once, as is the case with cache misses/initial loads. 
We’re keeping an eye on our server-side performance, and it’s looking much happier after the pagination change. We’re going to watch it for a little while, and if it stays happy, we’ll bump the per-page count up to 500 and see what happens. Hopefully it keeps cruising, and then the number of serialized calls can be reduced.
Any other thoughts on v2? We’re getting closer and closer to beta. 
Interesting. I was wondering about throwing together an iOS app (since the existing one does not fit my taste).
Do you guys have any SDK in Swift? Or would you like to have one?
That would make a great open-source project.
Pagination fix is now live.
Thanks for the alpha documentation. Sorry if the answer is obvious, but I’m in the process of translating my script to the new API, and I cannot find how to get the times at which items from the current level will next show up in my queue.
I recognize that summary can provide some of the relevant information, but it will give me information about the next 24 hours only. Is that by design, or did I miss something?
I guess that with “data_updated_at” and “srs_stage” for a given card I could compute an ETA, but that seems very cumbersome.
Thanks!
This is how you get the information you are looking for:
Use the /assignments endpoint with the levels filter, for example /assignments?levels=5 if the user’s current level is 5. Under data you have a collection of assignment objects belonging to your current level. Within those objects the key of interest is available_at: an ISO 8601 timestamp of when the associated subject will appear in the user’s review queue. Note that if the timestamp is null, it is because the associated subject is sitting in the lesson queue or it has been burned (in other words, srs_stage is 0 or 9). You can programmatically filter out objects with srs_stage equaling 0 or 9 if you just want items in the review state. Alternatively, you can tack on the srs_stages filter to only return review-state items, for example /assignments?levels=5&srs_stages=1,2,3,4,5,6,7,8.
See headings GET /assignments and GET /assignments/:id for more detailed information.
The /summary endpoint is just a reporting endpoint for what is available “now” in the user’s lessons and reviews (plus any upcoming reviews in the next 24 hours, broken down by the hour).
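As a rough sketch of the filtering described above (field names follow the /assignments response; the helper name and sample shape are mine):

```javascript
// Given an array of assignment objects from /assignments?levels=N,
// return the earliest available_at among review-state items
// (srs_stage 1-8; a null available_at means lesson queue or burned).
function nextReviewAt(assignments) {
  var times = assignments
    .filter(function (a) { return a.data.srs_stage >= 1 && a.data.srs_stage <= 8; })
    .map(function (a) { return a.data.available_at; })
    .filter(function (t) { return t !== null; })
    .sort(); // ISO 8601 UTC timestamps sort correctly as strings
  return times.length ? times[0] : null;
}
```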
A couple quick updates:
We have just released an update which adds an ids filter to all collection endpoints.
So here is the game plan for the API v2:
When a user’s subscription lapses, do all of the endpoints only dispense data for the free levels?
Any thoughts on supporting up to the level that the user reached instead?
(My subtext for asking is that my subscription ends soon, and I’m wondering about supporting userscripts. In hindsight, a lifetime subscription would have been helpful for that sole purpose, but I didn’t even know what userscripts were when I first subscribed to WK.)
We are planning to make all content accessible (with some restrictions on the subjects, though, per terms and conditions we are still drafting), regardless of the user’s subscription state. I believe that’s how the API v2 is set up right now.
If anyone else is trying this in PowerShell, here’s a snippet I used:
$Headers = @{ "Authorization" = "Token token=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" }
$User = Invoke-RestMethod -Headers $Headers "https://www.wanikani.com/api/v2/user"
$User.data
username : doncr
level : 38
profile_url : https://www.wanikani.com/users/doncr
started_at : 2014-06-28T09:56:56.716474Z
subscribed : True
current_vacation_started_at :
New to authorization tokens (and also new to JavaScript). Any easy way to put in a subject_id and get back the real character?
var v2key = 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX';
var url = 'https://www.wanikani.com/api/v2/';
var path = 'assignments';
var id = '?srs_stages=9';
$.ajax({ // ajax call starts
  url: url + path + id,
  dataType: 'json', // Choosing a JSON datatype
  method: 'GET',
  headers: { 'Authorization': 'Token token={' + v2key + '}' },
})
.done(function(data) {
  console.log(data);
  debugger;
  for (var i = 0; i < data.data.length; i++) {
    id = '/' + data.data[i].data.subject_id;
    path = 'subjects';
    $.ajax({ // ajax call starts
      url: url + path + id,
      dataType: 'json', // Choosing a JSON datatype
      method: 'GET',
      headers: { 'Authorization': 'Token token={' + v2key + '}' },
    })
    .done(function(data2) {
      console.log(data2.data.character);
      //$('col-md-9').append(JSON.stringify(data, undefined, 2) + '<br>');
    })
    .fail(function(err) {
      console.error(err);
    });
  }
})
.fail(function(err) {
  alert(err);
});
It seems like if I request too much, there is an error, and the request stops working.
Rather, I might even ask, what knowledge is required?
Can you tell me what exactly you are trying to accomplish?
To me it looks like you are doing the following:
If this is the case, I can go over what I would do.
But before we start I want to address the 403s you are getting.
The reason you are getting 403 is because you are hitting the rate limiter we have in place (10 requests per second and 60 requests per minute). The code you are using is doing a request for each assignment to retrieve the subject. If you have a lot of burned items (can be in the multiple 1000s given your level), you are going to exceed the rate limit really quickly.
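For cases where multiple requests really are unavoidable, one way to stay under limits like the ones quoted above is to send the requests one at a time with a pause between them. A rough sketch (the helper name and the injected request functions are illustrative, not part of the API):

```javascript
// Run the given request functions one at a time, waiting gapMs
// milliseconds after each one, so bursts never exceed a rate limit.
// Each element of requestFns is a function returning a Promise.
function runSequentially(requestFns, gapMs) {
  return requestFns.reduce(function (chain, fn) {
    return chain.then(function (results) {
      return fn().then(function (r) {
        results.push(r);
        return new Promise(function (resolve) {
          setTimeout(function () { resolve(results); }, gapMs);
        });
      });
    });
  }, Promise.resolve([]));
}
```

With the 10-requests-per-second figure above, a gap of a bit over 100 ms per request would keep you safely under the per-second limit (the per-minute limit still caps longer runs).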
Anyway, back to achieving your goal.
I am going to demonstrate two ways to do it. The first way I will use your existing code and alter it so it is more efficient. The second way is how I would personally write out the code to achieve your goal.
var apiKey = 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
    url = 'https://www.wanikani.com/api/v2/',
    endpoint = 'assignments',
    parameters = '?srs_stages=9';

$.ajax({
  url: url + endpoint + parameters,
  dataType: 'json',
  method: 'GET',
  headers: { 'Authorization': 'Bearer ' + apiKey },
}).done(function(responseBody) {
  var subject_ids = responseBody.data.map(a => a.data.subject_id).join(),
      endpoint2 = 'subjects',
      parameters2 = '?ids=' + subject_ids;
  $.ajax({
    url: url + endpoint2 + parameters2,
    dataType: 'json',
    method: 'GET',
    headers: { 'Authorization': 'Bearer ' + apiKey },
  }).done(function(responseBody2) {
    var subject_characters = responseBody2.data.map(s => s.data.characters).join(', ');
    console.log(subject_characters);
  }).fail(function(error) {
    alert(error);
  });
}).fail(function(error) {
  alert(error);
});
var apiBaseUrl = 'https://www.wanikani.com/api/v2/';
var apiToken = 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX';
var requestHeaders = new Headers({
  'Wanikani-Revision': '20170710',
  Authorization: 'Bearer ' + apiToken,
});

var assignmentsApiEndpointPath = 'assignments';
var assignmentsFilters = '?srs_stages=9';
var assignmentsApiEndpoint = new Request(apiBaseUrl + assignmentsApiEndpointPath + assignmentsFilters, {
  method: 'GET',
  headers: requestHeaders
});

fetch(assignmentsApiEndpoint)
  .then(function(response) { return response.json(); })
  .then(function(responseBody) { return responseBody.data.map(a => a.data.subject_id).join(); })
  .then(function(subject_ids) {
    var subjectsApiEndpointPath = 'subjects';
    var subjectsFilters = '?ids=' + subject_ids;
    var subjectsApiEndpoint = new Request(apiBaseUrl + subjectsApiEndpointPath + subjectsFilters, {
      method: 'GET',
      headers: requestHeaders
    });
    fetch(subjectsApiEndpoint)
      .then(function(response) { return response.json(); })
      .then(function(responseBody) { console.log(responseBody.data.map(s => s.data.characters).join(', ')); });
  });
The key thing in both examples: only two requests are made to the API. This keeps you under the rate limit.
What I did is collect all the subject_ids from the assignments response, join them in a comma-delimited string, and leverage the ids filter on the subjects endpoint to get the subjects related to the assignments.
If you are going to be hitting the API a lot to look up subjects, it is best to cache the subjects locally and do a find on the cache rather than hitting the API every time you want the subject data.
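As an illustration of that caching idea, a minimal in-memory sketch (a real userscript would persist the cache, e.g. to localStorage; getSubject and the injected fetchSubject function are hypothetical names, not part of the API):

```javascript
// Look the subject up in the cache first; fall back to fetchSubject
// (a function returning a Promise for the subject) on a miss,
// and remember the result for next time.
function getSubject(id, cache, fetchSubject) {
  if (cache.has(id)) {
    return Promise.resolve(cache.get(id));
  }
  return fetchSubject(id).then(function (subject) {
    cache.set(id, subject);
    return subject;
  });
}
```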
Ideally, you would know a programming language. But if you’d rather not program, I suggest using something like Insomnia, which is free and excellent.
Thanks, but
Anyway, I’ll try to solve it myself, based on this info
BTW, I think the answer is localStorage / jStorage.
I think the parameters being passed are too large… Going to see if I can replicate and offer an alternative solution.
Also, best way to deal with pagination of more than 1,000 items? Best way to make use of responseBody.pages.next_url?
I am trying to detect leeches amongst burned items, i.e. calculate leech score; and resurrect according to leechness (probably mass resurrection via Burn Manager).
I figured that even 100 ids per request is too large; however, 50 is acceptable.
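Splitting a large id list into batches of 50 (the size reported as acceptable above) can be done with a small helper like this (idBatches is just an illustrative name):

```javascript
// Split an array of subject_ids into chunks of at most `size` ids,
// each joined into a comma-delimited string ready for the ids filter.
function idBatches(ids, size) {
  var batches = [];
  for (var i = 0; i < ids.length; i += size) {
    batches.push(ids.slice(i, i + size).join());
  }
  return batches;
}
```

Each returned string can then be appended to the subjects endpoint as ?ids=…, one request per batch.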
I have a plan; still, I have yet to learn how to wait out the requests-per-minute limit. Also, how to automate pagination.
My plan is
So, I’ll need to deal with 25k items. I have to teach the program to wait.
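A sketch of the pagination part of that plan, following pages.next_url until it runs out (fetchAllPages is my name for it, and fetchJson is a hypothetical injected function, so the loop stays separate from auth headers and rate-limit waits; a real client would pause between pages):

```javascript
// Collect data from every page of a collection endpoint by following
// pages.next_url, which is null on the last page. fetchJson takes a URL
// and returns a Promise for the parsed response body.
function fetchAllPages(firstUrl, fetchJson) {
  function step(url, acc) {
    if (!url) { return Promise.resolve(acc); }
    return fetchJson(url).then(function (body) {
      return step(body.pages.next_url, acc.concat(body.data));
    });
  }
  return step(firstUrl, []);
}
```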
Now I have a problem with over-caching: more than 5 MB of localStorage… (especially for the statistics alone).