Wanikani Open Framework [developer thread]

I moved the menu-insertion stuff into its own module, and added some new capabilities.

Inserting an item now looks like this:

	wkof.Menu.insert_script_link({
		name: 'appstore',
		title: 'App Store',
		on_click: open_app_store
	});

	wkof.Menu.insert_script_link({
		name: 'timeln',
		title: 'Ultimate Timeline',
		submenu: 'Settings',
		on_click: open_timeln_settings
	});

There are no pre-defined submenus. They’re created automatically as needed when you call insert_script_link().
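For illustration, the on-demand submenu creation could be sketched like this (a simplified model, not the framework's actual internals):

```javascript
// Sketch (not the actual framework code) of how insert_script_link()
// might create submenus on demand.  The menu is modeled as plain
// objects; handlers are stand-ins.
function make_menu() {
    var top_level = [];  // items and submenus, in insertion order
    var submenus = {};   // submenu name -> array of items

    function insert_script_link(config) {
        if (config.submenu) {
            // Create the submenu automatically the first time it is named.
            if (!submenus[config.submenu]) {
                submenus[config.submenu] = [];
                top_level.push({title: config.submenu, items: submenus[config.submenu]});
            }
            submenus[config.submenu].push(config);
        } else {
            top_level.push(config);
        }
    }
    return {insert_script_link: insert_script_link, top_level: top_level};
}

var menu = make_menu();
menu.insert_script_link({name: 'appstore', title: 'App Store', on_click: function(){}});
menu.insert_script_link({name: 'timeln', title: 'Ultimate Timeline', submenu: 'Settings', on_click: function(){}});
// menu.top_level[0] is the App Store item; menu.top_level[1] is the
// auto-created 'Settings' submenu containing the Timeline item.
```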

On small screens, it looks like this:


Sorry for the long delay between updates. Between the holidays, a flu bug, etc., I haven’t had as much time to work on the Open Framework as I wanted.

But I’m making good progress again. The Settings and Menu modules are finished, and the file & data caching systems are working great. When you include external files (html, css, js, json, etc), you have the option to cache the file in indexeddb (without having to understand anything about indexeddb). So, the next time you run your script, the external file will be loaded from ‘cache’ instead.

The cache can also be used as a simple substitute for localStorage, but with much larger storage space. For example, suppose you want to store some pre-processed data (like leech calculations):

wkof.file_cache.save('MyScript.calculated_data', some_object);

And load it again later:

wkof.file_cache.load('MyScript.calculated_data').then(function(data) {
    // 'data' is the object you saved earlier
});
The cache API includes methods for listing and deleting the contents of the cache, or wiping it completely:

// Inspect the contents of the file_cache
wkof.file_cache.dir
> 'MyScript.calculated_data': {added: "1/5/2018, 12:56:08 PM", last_loaded: "1/5/2018, 12:56:08 PM"}
> 'https://www.greasyfork.org/scripts/some_script.js': {added: "1/5/2018, 12:56:03 PM", last_loaded: "1/5/2018, 12:56:03 PM"}

// Delete a specific file
wkof.file_cache.delete('MyScript.calculated_data');

// Delete everything from cache
wkof.file_cache.clear();

For now, you can manually manage cache from the JS console, or from a script. Eventually, I’ll have a Cache Manager module for graphical cache management. I’ll also be adding some automated cache management that runs only once a week or so, and cleans up stuff that doesn’t appear to have been accessed in a while. That should help prevent indexeddb from filling up on a user’s machine, for example due to leftover files from script upgrades over time.
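The cleanup pass described above could work roughly like this (a sketch, not the actual framework code; the entry format mirrors the added/last_loaded timestamps shown in the cache listing above, and the 30-day threshold is just an example):

```javascript
// Sketch of an automated cleanup pass over the file_cache directory:
// collect the names of entries that haven't been loaded recently.
function find_stale_entries(dir, now, max_age_days) {
    var max_age_ms = max_age_days * 24 * 60 * 60 * 1000;
    var stale = [];
    for (var name in dir) {
        var last = new Date(dir[name].last_loaded).getTime();
        if (now - last > max_age_ms) stale.push(name);
    }
    return stale;
}

var dir = {
    'MyScript.calculated_data': {added: '2018-01-05T12:56:08', last_loaded: '2018-01-05T12:56:08'},
    'old_script.js':            {added: '2017-06-01T00:00:00', last_loaded: '2017-06-01T00:00:00'}
};
// With "now" at 2018-01-06 and a 30-day threshold, only old_script.js is stale.
var stale = find_stale_entries(dir, new Date('2018-01-06T00:00:00').getTime(), 30);
```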

I’m currently running through some iterations of the Item Data subsystem to make sure it’s as flexible as I want it to be. For example, I want to be able to:

  • Add 3rd-party data sources (e.g. missing Joyo kanji, non-WK vocab, kana-only vocab)
  • Register filters with the framework, so any script that recognizes filters will be able to use filters from another script. For example, the Quiz script will be able to choose items to quiz you on based on a Leeches filter, etc.
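The filter registration could work something like this sketch (the registry shape and the 'leeches' threshold are hypothetical, not the framework's actual API):

```javascript
// Sketch of a shared filter registry: one script registers a filter
// by name, and any other script can apply it without knowing its
// internals.  Names and thresholds here are made up for illustration.
var registered_filters = {};

function register_filter(name, filter_func) {
    registered_filters[name] = filter_func;
}

function apply_filters(items, filter_names) {
    return filter_names.reduce(function(result, name) {
        return result.filter(registered_filters[name]);
    }, items);
}

// One script registers a 'leeches' filter...
register_filter('leeches', function(item) { return item.incorrect >= 4; });

// ...and another script (e.g. a quiz) can select items with it by name.
var items = [{name: 'rain', incorrect: 6}, {name: 'fire', incorrect: 0}];
var leeches = apply_filters(items, ['leeches']);
```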

One thing I’m also excited about is an apikey override: script developers can temporarily use another user’s apikey (if the user gives it to them) to troubleshoot issues with that user’s specific data set. For example, if someone is having problems with Timeline but I can’t replicate it with my own data, the user can PM me their apikey, I plug it in as an override, and Timeline will then display their data instead of mine.

I’m debating between the following two methods of including the Open Framework from a client script:

  1. Each script that uses the framework would include a @require line in the header, pointing to a specific version of the framework, or
  2. The user installs the framework separately, and makes sure it runs before all of the client scripts (via script ordering in TamperMonkey/GM/etc).

The advantage of #1 is that the user wouldn’t have to learn how to set the script order, and wouldn’t have to install two separate scripts the first time they install a framework-based script.

The advantage of #2 is that the user would be prompted separately to update the framework (if they’ve configured TamperMonkey/etc to prompt for updates), and versions would be much simpler to manage.

I already have some version management in place, but it gets tricky, and if a new script requires a newer version of framework than what the user already has cached, I have to prompt the user myself to approve installation of the newer framework, then refresh the page to make sure it loads. With option #2, the user can simply allow the framework to auto-update to the latest, or choose to update (via TamperMonkey prompt, if enabled) whenever they’re ready.
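The “newer version than cached” check comes down to comparing dotted version strings, something like this sketch (not the framework's actual code):

```javascript
// Sketch of a dotted-version comparison.  Returns a negative number
// if a < b, zero if equal, positive if a > b.  Missing components
// are treated as zero, so '1.0' equals '1.0.0'.
function compare_versions(a, b) {
    var pa = a.split('.').map(Number);
    var pb = b.split('.').map(Number);
    var len = Math.max(pa.length, pb.length);
    for (var i = 0; i < len; i++) {
        var diff = (pa[i] || 0) - (pb[i] || 0);
        if (diff !== 0) return diff;
    }
    return 0;
}
// Note that components compare numerically, so '1.10' is newer than '1.2'.
```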

Does anyone foresee any problems with option #2? Any preferences for not doing it that way?

[ping: @valeth, @DaisukeJigen, @hitechbunny, @acm2010, @seanblue, @Subversity]


The only problem I see with option #2 is if you ever make breaking changes to the framework. If the user manually installs the framework first, it’s possible that an old script will stop working when the framework auto-updates, with no easy way of getting it to work again. So if you go with this option, you might want to make sure you have some kind of versioning system (for the global framework variable name) so that the user can have multiple versions of the framework installed at the same time.

Yeah, I do have a versioning system. For the most part, the API’s interface should remain relatively stable since the function calls are either (a) very simple, or (b) use an ‘options’ object as their main argument, which allows versioning inside the object itself.

Whether I end up going with option #1 or #2, I suspect I’d end up in the same boat regarding versioning, since #1 also only allows one version to run at a time. Essentially, I’m opting to ensure that the framework remains as backward-compatible as possible (via stable contracts and careful update-planning), rather than finding a way to make multiple versions run at the same time.

#2 sounds nice, but I see lots of users complaining that scripts don’t work.
“Your stupid script doesn’t work”
“Did you install the framework?”
"The what? … Ok it’s loaded, still doesn’t work’
“Did you set the framework to load first?”
“The F?.. Fix your damn script”

Can you load js dynamically with tamper/grease monkey? I seem to recall they have some safeguard against that, but I’m not sure.
if (typeof framework === 'undefined') { /* load framework */ }
Script writers would need to throw that at the top of their scripts, but that’s easy enough to explain to those that would write scripts with it anyway.

For Tampermonkey, at least, you can get a script’s position in the scripts list via GM_info.script.position, so you could check the ordering and warn the user.

I do think we’d need to be better about explaining script installation. I still occasionally get people who try installing without a script manager.

Or maybe redirect to an instructions thread if the user doesn’t have the framework installed:

if (!window.wkof) location.href = url_of_instructions;

Instruction thread:

As you mentioned, we can load it dynamically, but the problem is making sure you’re only loading one instance, and making sure that that instance is the newest version needed to support all of the scripts you have installed. I’m against using URLs whose contents can be modified since that’s a security risk to users, so having a ‘latest_version’ url is problematic. (That’s what TamperMonkey/etc’s upgrade notifications are good for)

In my current setup, the framework caches itself in indexeddb. If a client script requests a newer version, the user is asked if they want to upgrade. If yes, the version requested by the client script will be loaded and cached, so the cache will contain the new version. But if the user clears indexeddb, they might have to go through a series of upgrades each time, depending on how many client scripts are requesting different versions.

One other factor that I’m wondering about…
In order to keep API data up to date, there are somewhere around 7 endpoints that we’d need to query every time the user navigates to a new page. Apiv2 has an ‘updated_after’ field that makes most of those queries return quickly, but it’s still thousands of users using scripts (not sure how many simultaneously).

We could make it only check once per N minutes, but I’m sure there are cases where an immediate response would be desirable, like when people look at their timeline immediately after doing reviews.
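The once-per-N-minutes idea is just a timestamp check per endpoint; a sketch (not actual framework code):

```javascript
// Sketch of a per-endpoint refresh throttle: remember when each
// endpoint was last fetched, and skip refreshes that come too soon.
var last_fetch = {};  // endpoint -> timestamp (ms)

function should_fetch(endpoint, now, min_interval_min) {
    var interval = min_interval_min * 60 * 1000;
    if (endpoint in last_fetch && now - last_fetch[endpoint] < interval) return false;
    last_fetch[endpoint] = now;
    return true;
}
```

A page-aware trigger (like refreshing /reviews right after reviews) could simply bypass the throttle for that endpoint.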

So, I’m wondering if we’ll eventually need to see which pages a user visits, and trigger refresh of certain endpoints accordingly (like /reviews after doing reviews, and /user after visiting the profile page.)

Or, maybe @viet can add an endpoint with the data_updated_at field for all of the other endpoints, so a single query would tell us what other endpoints need to be queried.

Or maybe querying all 7 endpoints for every refresh is no big deal?? I’d hate to crash things when Ultimate Timeline suddenly goes live on Apiv2! (There are at least 5000 users of Timeline, I think).

^^^ By the way… the main point of the post above was:
If we need to monitor which pages a user visits (to know when to query endpoints), having the Open Framework as a standalone script would be necessary so it can @include itself on all of the relevant pages.

I have to do this with jQuery UI at work: if typeof jQuery.ui == 'undefined', I add the link to the head. I suppose it’s possible, though, that two scripts could both see framework == undefined, depending on how fast they execute back to back.

True. Even if you just update the js and don’t change the filename, caching can screw you.

I guess the instructions popup/redirect/whatever is the best bet. Although I’m sure we’ll still get complaints. But then again, complaints are usually unavoidable no matter what you do.

Edit: Actually, I seem to remember at a previous job we would link to js files in the header like normal, but would tack on + “?” + new Date(), or something like that, which would trick the browser into reloading the js no matter what was in cache…
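The trick described above is just a unique query parameter; a sketch:

```javascript
// Sketch of the cache-busting trick: append a unique query string so
// the browser treats the URL as never-before-seen and refetches it.
function bust_cache(url) {
    return url + (url.indexOf('?') < 0 ? '?' : '&') + 't=' + Date.now();
}
// bust_cache('https://example.com/script.js') -> '...script.js?t=<timestamp>'
```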

Ahh… I was thinking @require in the client script header, not actual dynamic loading. In the script header, it would load a copy per script regardless, though my bootloader keeps the duplicate parts to a minimum.

But yeah, with actual dynamic loading, you’d have to create a flag somewhere to prevent async race conditions during load. Not a problem… just an extra step.
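That flag can be a shared promise; a sketch (not actual framework code — `load_script_from_url` is a stand-in for whatever actually fetches and evals the framework):

```javascript
// Sketch of a load-once guard: the first script to ask for the
// framework starts the load; later scripts reuse the same promise
// instead of starting a second, racing load.
var framework_promise = null;

function ensure_framework(load_script_from_url) {
    if (!framework_promise) {
        framework_promise = load_script_from_url('https://example.com/wkof.js');
    }
    return framework_promise;
}

// Two scripts calling ensure_framework() back to back will trigger
// exactly one fetch and share the result.
```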

re: tricking cache…
Yeah, ?timestamp would work for tricking cache, but I want the user to make the choice to update (unless they opt out of being asked), so a security-minded user can scan the changes before accepting updates.

So, upon installing a script that uses a newer version of framework, my current code asks the user if they want to update (with links to the diff if they want to look at changes). If they say yes, the new version gets loaded dynamically and stored in indexeddb for fast fetch next time. Indexeddb would always contain the latest user-approved version of the framework. But if you clear the database, the version of framework requested by the first-to-run client script would be the one that gets fetched, and then you’d get one or more update notices again. That’s part of what prompted me to rethink this again. I’m trying to test every scenario before release.

To clarify a bit, here’s how my current code works:

Each client script adds a @require for the framework’s bootloader (at whatever framework version they happen to include). The bootloader contains a list of urls for the modules that the framework supports, and just enough code to load any requested modules.

But before it uses that URL list, it checks indexeddb to see if there’s a newer list of URLs (presumably already approved by the user). If the cached (indexeddb) version is newer, it uses those URLs. If the cached URLs are older, then the client script needs an upgrade. So, I ask the user for approval. If the user approves, the new URLs are cached, which equates to an upgrade. If the user says no, the requesting script won’t continue, though all the other scripts will run as usual, as long as they also don’t need an upgrade.
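That decision logic can be sketched like this (a simplification, not the actual bootloader code; versions are plain numbers here instead of dotted strings for brevity):

```javascript
// Sketch of the bootloader's URL-list selection: use the newer of the
// bundled list and the cached (user-approved) list, and flag when the
// requesting client needs a user-approved upgrade.
function choose_url_list(bundled, cached) {
    // No cached list yet: use the list bundled with the client script.
    if (!cached) return {urls: bundled.urls, upgrade_needed: false};
    // Cached list is same or newer (already user-approved): use it.
    if (cached.version >= bundled.version) return {urls: cached.urls, upgrade_needed: false};
    // Cached list is older: the client needs a user-approved upgrade.
    return {urls: bundled.urls, upgrade_needed: true};
}
```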

So far, it’s working fine. But if you clear cache, it clears all prior approvals for upgrade, and you’ll have to go through a sequence of re-approvals (depending on the order that the client scripts run). I might be able to work around that somehow, but it feels like it’s starting to fail the smell-test for clean code.

Keeping the bootloader as a standalone script, and putting it under Tampermonkey’s built-in diff-and-approve process, makes that a lot simpler. I just want to make sure I’m not missing anything. And, as you’ve pointed out, the slight added complexity for users may leave a bad taste… but maybe we can minimize that with some decent and consistent documentation. That remains to be seen.

I guess I’ll keep a copy of the code for version management, and give it a try as a standalone script. If it’s too much complaining and problems, we can fall back to the version-management code.

If people have auto-update enabled for their scripts, it seems like overkill to request permission to update the framework script. People that are security minded enough to disable automatic updates to check the code changes should also be capable of checking all the code being imported using @require. Maybe I’m missing something, but that’s my thoughts.

Good point.

Another benefit of standalone:
If we use @require, framework updates won’t propagate unless one of the installed client scripts changes and picks up the update. If it’s standalone, the framework can update independently. So, for example, if we improve the Settings dialog in a way that doesn’t directly affect client scripts, those changes could propagate without any clients having to rev their script.

@seanblue, @acm2010, @DaisukeJigen, @hitechbunny

I just checked out the latest GreaseMonkey on latest FF. It looks like they’ve gotten rid of the ability to set the script execution order… in fact, they’ve totally gotten rid of their “Manage scripts” section. What the heck?? Has GreaseMonkey committed suicide? (I’m aware of the breaking change in November with FF57… but I didn’t expect most of the GM interface to be scrapped, if that’s what actually happened.)

Are any of you using it? Maybe I’m just missing something. If not, I’m wondering whether to bother supporting it (until it gets fixed).

Nope, I just use Tampermonkey and Chrome.


Wanikani Open Framework

Github early access

Early access to the Open Framework. \o/


I don’t have documentation yet, but the sample_client.js should be useful in finding your way around.

To install:

  1. Copy the contents of Core.js into a new script in TamperMonkey (or whatever)
  2. Move the script to slot #1 in TamperMonkey so it will run before any client scripts that use it.
  3. Add the Update URL in TamperMonkey if you want to keep up to date during development:
    (This won’t be the permanent address. Releases will be on GreasyFork to take advantage of the ‘Install’ button, statistics, and simpler diffing)

Installing the sample_client.js:

  1. Copy the contents of sample_client.js into a new script in TamperMonkey (or whatever)
  2. Make sure the sample client is set to run after the Wanikani Open Framework script (Core.js).
  3. You probably don’t want to auto-update this one, otherwise you may overwrite any changes you make while exploring the framework.

Getting started

The main thing to know is that the framework is exposed through window.wkof. From the Javascript console, you can explore what functions and modules are loaded and available:

The modules Apiv2, Menu, and Settings are loaded by the sample_client script. If no script includes any modules, you’ll only see the functions from Core.js.


The Core.js contains:

file_cache    // Some functions for caching files in indexeddb.
    dir {}    // An object containing a list of files stored in indexeddb.
    clear()   // Clear the file_cache.
    load()    // Load a file from file_cache.
    save()    // Save a file to file_cache.
    delete()  // Delete a file from file_cache.

include()     // Include a module for use with your script (Apiv2, Menu, Settings).
ready()       // Returns a promise that resolves when the specified module is ready to be used.

get_state()   // Get the current state of a state variable.
set_state()   // Set the state of a state variable.
wait_state()  // Returns a promise that resolves when a state variable reaches a specified state.

trigger()     // Initiates a framework event.
on()          // Specify a function to be called when the specified event is triggered.

load_file()   // Loads a file from cache or url.  Returns a promise that resolves with the file contents.
load_css()    // Loads a css file and installs it into the document.
load_script() // Loads a javascript file and installs it into the document.
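As a rough illustration (not the framework's actual implementation), the state and event functions above can be built on plain objects and promises:

```javascript
// Sketch of state variables and events like the ones listed above.
var state = {};          // state variable -> current value
var state_waiters = {};  // state variable -> [{value, resolve}]
var listeners = {};      // event name -> [callback]

function set_state(name, value) {
    state[name] = value;
    // Wake up anyone waiting for this variable to reach this value.
    (state_waiters[name] || []).forEach(function(w) {
        if (w.value === value) w.resolve(value);
    });
}

function get_state(name) { return state[name]; }

function wait_state(name, value) {
    if (state[name] === value) return Promise.resolve(value);
    return new Promise(function(resolve) {
        (state_waiters[name] = state_waiters[name] || []).push({value: value, resolve: resolve});
    });
}

function on(event, callback) {
    (listeners[event] = listeners[event] || []).push(callback);
}

function trigger(event, data) {
    (listeners[event] || []).forEach(function(cb) { cb(data); });
}
```

For example, a module would call set_state('MyModule', 'ready') when done loading, and a client would wait_state('MyModule', 'ready').then(...) before using it.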

Apiv2 module

I don’t recommend using these functions in your scripts yet. While they are useful, and you may want to explore them now, I’m currently working on a higher-level interface for managing data.

fetch_endpoint()   // Fetch an API endpoint from the WK server.  Supports filters,
                   // and automatically fetches all pages of the returned result.
get_endpoint()     // Fetches endpoint data from cache first, then retrieves only
                   // updates from the WK server.
get_apikey()       // For internal use only.  Will probably be removed.
                   // Use wkof.Apiv2.key instead.
is_valid_apikey_format()  // Makes sure a string fits the APIv2 format.
print_apikey()     // Remnant of early development.  Will probably be removed.

key (string)       // Contains the apiv2 key that the framework is currently using.
user (string)      // Contains the username that the framework is currently using.

The cached endpoint data is stored in wkof.file_cache. After fetching an endpoint, you will see the cached data appear in the cache (note that it takes about 15 seconds to fetch /subjects the first time):

If you want to use a different API key for testing:

localStorage.apiv2_key_override = 'someone_elses_key_here';

Then refresh the page. It will automatically clear cached API data (except /subjects, which is not user-specific), so you know you’ll only be getting the data you want.
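The selective clearing could look something like this sketch (the 'Apiv2.' key prefix is hypothetical, just to illustrate filtering the cache directory; it is not the framework's actual naming):

```javascript
// Sketch of clearing user-specific cached API data while keeping
// /subjects, which is the same for every user.
function keys_to_clear(cache_keys) {
    return cache_keys.filter(function(key) {
        return key.indexOf('Apiv2.') === 0 && key !== 'Apiv2.subjects';
    });
}

// Given a cache directory listing, this selects only the
// user-specific API entries for deletion.
```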

Menu module

This only has one function:

insert_script_link()  // see sample_client.js for example usage

Settings module

This module only has a class constructor that returns a dialog box object:

wkof.Settings()    // see sample_client.js for example usage

More coming soon…

I’m currently working on the higher-level data interface. It consists of:

  • A set of functions for interlinking the various APIv2 endpoints
    (e.g. /subjects, /assignments, /reviews, etc)
  • A toolbox of predefined filters for narrowing down the data you want to look at.
  • An interface for registering your own filters, so other scripts can make use of them.
  • An interface for registering 3rd-party data items, like kana-only vocab, core-10k vocab, etc.

Scripts will be able to look at the registered filters and data sources, and maybe allow a user to choose what data and filters they want to use in your script… for example, the Self-Study Quiz script will allow the user to create a preset that selects only leeches, or maybe items that the user failed in review during the last 24 hours.

@valeth, @DaisukeJigen, @hitechbunny, @acm2010, @seanblue, @Subversity


Sorry for the spam, I’ve been following this thread from the beginning and finally just wanted to quickly thank you for putting in the effort and building this. I’m very much looking forward to using it when it passes the “I don’t recommend using these functions in your scripts yet.” phase.