TL;DR:
- Existing API keys aren't changing and will keep working, so nobody panic.
- We added some writable endpoints to API v2.
- We’re adding the ability to have multiple API keys with different permissions.
- Scrapers out there need to update their code.
We recently added PUT and POST endpoints for updating user preferences, finishing lessons, answering reviews, and creating study materials (notes and synonyms). Documentation on all of them is in the works or already live; take a look at the documentation, the documentation repo, or its open PRs.
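As a rough sketch of what calling one of the new write endpoints might look like, here's a request to create a study material. The endpoint path, payload shape, and revision header are assumptions based on the API v2 documentation, so double-check them there before relying on this:

```python
import json
import urllib.request

API_TOKEN = "your-api-v2-token"  # must be a token with write permission

# Hypothetical payload: a meaning note and synonym for one subject.
# Field names here follow the v2 docs; verify them against the live reference.
payload = {
    "study_material": {
        "subject_id": 440,
        "meaning_note": "Remember this one by ...",
        "meaning_synonyms": ["construction"],
    }
}

request = urllib.request.Request(
    "https://api.wanikani.com/v2/study_materials",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Wanikani-Revision": "20170710",  # API revision, per the v2 docs
        "Content-Type": "application/json; charset=utf-8",
    },
    method="POST",
)

# urllib.request.urlopen(request) would actually send it; a token without
# write permission should get an error response instead of a created resource.
print(request.get_method(), request.full_url)
```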
These additions call for permissions, since not every tool should be able to write to your account. Permissions are in place right now; the defaults allow all reads and deny all writes, so everything works just like before. “But that means I can’t use those endpoints!”, you say?
Well, we’re letting everyone have multiple API keys (a.k.a. personal access tokens) and set different permissions on each one, including write permission for those fun new endpoints. Tokens can be named and given expiration dates, making it easier to see who has access and to take that access away.
To manage all that, we’re moving tokens into their own section of the user settings. We’re putting the finishing touches on the UI, and the changes will launch on the morning of March 18, Pacific time. The new interface will be at WaniKani — Log in. The API v1 key lives in a similar spot on the page, with the same interface as before (a read-only input field). The API v2 keys work differently: they appear in a list, have names, and are generally fancier.
That means all you fine people who scrape API keys off the current interface are looking at a breaking change. We discourage scraping for API keys, since it bypasses user consent and can break without announcement, but we know it’s done. If you have to scrape those keys, this is your chance to fix up your process.
PS. We’re working on making WaniKani an OAuth provider. We’ll post more information as we get closer to launch, but we’re hoping it solves the scraping problem for those applications out there that would benefit from OAuth (provided they’ve sacrificed enough turtles to the Crabigator).