[Userscript] The GanbarOmeter

The GanbarOmeter v4.0.9

:warning: This is a third-party script/app and is not created by the WaniKani team. By using this, you understand that it can stop working at any time or be discontinued indefinitely.


The “Speed” and “Reviews” graphs don’t work, and can’t work until the WK team decides how to handle reviews in the API. The GanbarOmeter itself works fine, but I don’t recommend installing this script at the current time. I hope to publish an update eventually.

Default theme:

Dark mode (with Breeze Dark installed)

Full installation instructions are below, but if you already have Tampermonkey installed, just browse to https://github.com/wrex/ganbarometer-svelte/raw/main/published/v4/bundle.user.js and click “Install”.

What is it?

This script adds three graphical elements to your dashboard. It aims to help users pace their lessons and manage their assignment queue. Tooltips and a help menu should explain everything.

Changes in v4.0.9

Changes between v4.0.4 and v4.0.9

  • Many bug fixes
  • Many look-and-feel tweaks
  • MUCH more compatible with the Breeze Dark theme now (be sure to select Dark colors in Settings → Appearance)
  • Changed both dial gauges to be zero-center with a target range indicated
  • Support for Waterfox, Chrome, and Firefox. Unsure about Safari.
  • Use a Svelte component for all number inputs and range sliders in the settings dialog (instead of the browser default)
  • Displays units of “qpm” (questions-per-minute), “spq” (seconds-per-question), and “rpd” (reviews-per-day) throughout
  • Added rudimentary settings migration for future upgrades
  • Added online help and info popups
  • Removed external validation (range sliders for everything except weights and what to quiz)

Changes between v4.0.3 and v4.0.4

  • Fix a one-byte bug: “radical”, not “radicals”
  • Use exact equality operator for all string comparisons (“===”). Old habits die hard.

Changes between v4.0.2 and v4.0.3

  • Make it clearer that color inputs are overrides (including changing the cursor to a pointer)
  • All tests work (AKA “skip borked tests!”)
  • Added scripts to publish (avoids manual errors)
  • Published version is now named “bundle.user.js” so Tampermonkey can install it directly
  • Uses questions-per-minute for speed (in the display and in settings); also shows seconds-per-question
  • Added version checking for localStorage variables (resets to defaults on incompatible changes)
  • Removed the disabled tzoffset setting (make your case if you need this)
  • Fixed (almost?) all the validation logic errors. Binds state bidirectionally across all settings components.

There is still one minor corner case with the validation messages, but it’s just a minor annoyance and it’s unlikely anyone will care (it involves navigating away after creating an invalid setting). I’m probably going to move to a single range slider for min/max values in a future version anyway (or a triple slider for min/target/max).

As always, let me know if you discover anything else I’ve missed.

User Interface

User interface overview

There are three primary elements:

  • The GanbarOmeter itself. This gauge helps you decide whether to slow down or speed up doing lessons. Basically, you want to keep the needle within the light green region.

  • The Speed gauge. This gauge tells you how long you are taking on average to answer reading and meaning questions. (Note that this depends on several heuristics and statistical tricks. This information isn’t directly measured or made available in the Wanikani API.) The target range is again shown as a light green region.

  • The Reviews chart which displays a bar chart with how many reviews you performed each day, the percentage of those reviews you answered correctly the first time, your desired target range (the light-green region), and your “expected number of daily reviews” (the dashed golden line) based on the current SRS stages of items in your future assignment queue.

If you click the “Data” navigation at the top, the graphs are replaced with a tabular view of the underlying data:


In addition to the Graphs/Data views, the menu up top includes:

  • A menu item for pulling up the online help (it’s long, but there is a table of contents you can click on if you don’t want to scroll through the whole thing).

  • An icon to pull up the preference settings dialog. Settings are broken into four separate sub-sections.

  • (Optional) A launcher for @rfindley’s Self-Study Quiz with just the “new” items in your assignment queue. “New” items are in stages 1 and 2 (“Apprentice 1” and “Apprentice 2”). This icon only appears if you have the Self-Study Quiz installed. By default it will only quiz you on new kanji, but you can choose whether to include radicals or vocabulary in the settings.

The GanbarOmeter

The first and most important graphical element is the GanbarOmeter itself. The purpose of this gauge is to tell you whether to slow down or speed up doing lessons depending on the counts, types, and SRS stages of upcoming assignments.

The GanbarOmeter displays a zero-center needle with three ranges. A numeric value is calculated based on upcoming assignments. That value is compared to lower and upper limits specified by the user. Values between the limits (in the green range) display a “good” label. Below the lower limit (the yellowish range) displays a “more effort needed” label. Above the upper limit (the reddish range) displays a “take a break” label.
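As a sketch, the label selection boils down to the following (names are illustrative, not the script’s actual identifiers):

```javascript
// Hedged sketch of the three-range label logic described above.
function gaugeLabel(value, lowerLimit, upperLimit) {
  if (value < lowerLimit) return 'more effort needed'; // yellowish range
  if (value > upperLimit) return 'take a break';       // reddish range
  return 'good';                                       // green range
}
```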

The Speed Gauge

The speed gauge shows how long on average it takes you to answer an individual reading or meaning question. You can specify your desired range of speeds in the settings dialog.

Note that this display estimates how much time is spent on each question (reading or meaning, including repeat questions for prior incorrect answers). This is quite different than the delay between individual review records (which is what the API returns).

The Reviews Chart

Finally, the review chart shows a great deal of information:

  • The number of reviews performed each day.
  • The percentage of items answered correctly the first time (both meaning and reading).
  • The target range (the light green box in the background).
  • The expected daily number of reviews based on the makeup of the assignment queue (the horizontal dashed golden line).

If you hover your mouse over an individual reviews bar, it displays the number of review items on that day as well as how many were answered correctly the first time (both reading and meaning).

Data view

The data view shows information in tabular form.

Note that the speed gauge displays the number of sessions (consecutive strings of reviews). It uses a statistical algorithm called the median absolute deviation on the intervals between review records to find sessions.
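For the curious, session detection along these lines can be sketched as follows (a simplified illustration, not the script’s actual implementation — the real algorithm may pick the split threshold differently):

```javascript
// Split review timestamps into "sessions" using the median absolute
// deviation (MAD) of the gaps between consecutive reviews: a gap that is
// a large outlier relative to the MAD starts a new session.
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function findSessions(timestamps, k = 3) {
  if (timestamps.length < 2) return [timestamps];
  const gaps = timestamps.slice(1).map((t, i) => t - timestamps[i]);
  const med = median(gaps);
  const mad = median(gaps.map((g) => Math.abs(g - med)));
  const threshold = med + k * mad; // gaps beyond this split sessions
  const sessions = [[timestamps[0]]];
  gaps.forEach((gap, i) => {
    if (gap > threshold) sessions.push([]); // outlier gap: new session
    sessions[sessions.length - 1].push(timestamps[i + 1]);
  });
  return sessions;
}
```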

To repeat, the speed table shows question accuracy (the percentage of individual reading or meaning questions answered correctly) while the review accuracy table displays item accuracy (the percentage of review items where both the reading and meaning were answered correctly the first time).


If @rfindley’s Self-Study Quiz is installed, a hand-drawn icon should be visible to launch it. (I tried to make it look like flash-cards). The idea is to let you do “out-of-band” reviews of newly introduced items as often as possible before doing your “proper” reviews within the SRS system.

Items in the first two stages have very short review-cycles. It doesn’t hurt to review the newest items more frequently. Once you answer them correctly enough times in the real Wanikani SRS system, they’ll move out of the earliest stages.

I highly, HIGHLY recommend installing the self-study script if, like me, you only perform reviews once per day. If so, you aren’t getting nearly enough reviews of early-stage items, so extra, out-of-band reviews of early-stage items are an excellent idea. Note that only reviewing once per day means you miss the 4-hour and 8-hour schedules for the newest items: the self-study quiz can help overcome this. You can choose in the settings dialog whether to quiz on new radicals, kanji, or vocabulary in any combination.

Once they’ve left the earliest stages, it’s best to let the SRS system figure out when you should next review an item.


Settings

Clicking the right-most icon will bring up the settings dialog:

There are separate sections for each of the different widgets. There are also sections for “advanced” settings you shouldn’t have to touch, as well as appearance settings (including where on the dashboard you want to see the GanbarOmeter, and your preferred colors).


Installation

  1. Install a script manager of some sort. Tampermonkey or Violentmonkey should both work (I use Tampermonkey myself).

  2. Install the Wanikani Open Framework.

  3. Navigate to your Wanikani dashboard.

  4. Click on the Tampermonkey settings (there should be an icon in your menu bar). If you happen to already have an older version of this script installed, please delete it. Open the Tampermonkey dashboard and click the “Utilities” tab.

  5. Navigate to this link. Tampermonkey should then prompt you to install the script. Simply click “Install”. Alternatively, cut and paste this URL into the “Install from URL” field under the “Utilities” tab in the Tampermonkey dashboard. (Note that one user experienced problems with kanji characters displaying incorrectly in the script after installing from URL.)

  6. Click “Install” on the next page to actually add the script. Then navigate back to your dashboard and refresh the page. You should now see the GanbarOmeter!
Other notes

Why this script exists

Developing this has been a lot of fun. I’ve learned a ton (I didn’t know Javascript and barely knew HTML/CSS when I started).

More importantly, though, I’ve found this script incredibly useful as I progress through the levels. Not to put too fine a point on it, I wrote this for myself.

The Wanikani designers have done a truly amazing job. The site teaches you to read kanji in the most efficient AND ALMOST EFFORTLESS way possible. It’s like magic.

The only requirements are that you:

  1. Do all of your reviews every day (get your review queue down to zero at least once daily).

  2. Do a sufficient number of lessons to maintain a “comfortable” pace while still meeting requirement 1.

In other words, you really must try to get your review queue down to zero at least once every single day. (Life happens, and it’s not the end of the world if you miss a day or two here and there, but you’ll pay for it in the end if you don’t keep up with your reviews.) Doing your reviews every day (or nearly) is non-negotiable.

Lessons though, feed the review queue. The more lessons you do, the more reviews you’ll spend time on. Lessons are the accelerator pedal, reviews are miles under the wheels, and there is no brake pedal! Once you’ve completed a lesson, you’ve launched that item into your review queue for the foreseeable future — there’s no pulling it back. Lessons you do today will have an impact on your review workload months in the future!

You’ve no choice regarding reviews, you’ve got to do them all. But it’s completely up to you to figure out how many lessons to do each day. Lessons are the only thing under your control!

Very smart people who’ve gone before me have figured out various “algorithms” to help figure out when to speed up or slow down with lessons. One common one is to keep your “Apprentice Queue” at about 100 items.

A more sophisticated algorithm is to keep “apprentice items” (stages 1-4) plus 1/10th of “guru” items (stages 5 and 6) around 150 items.
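In code, that heuristic is just the following (a tiny illustrative sketch; counts are keyed by SRS stage number):

```javascript
// Workload heuristic described above: apprentice items (stages 1-4) plus
// one tenth of guru items (stages 5-6), with roughly 150 as the target.
function workload(counts) {
  const apprentice = [1, 2, 3, 4].reduce((n, s) => n + (counts[s] || 0), 0);
  const guru = (counts[5] || 0) + (counts[6] || 0);
  return apprentice + guru / 10;
}
// e.g. 100 apprentice items and 500 guru items: 100 + 50 = 150 (on target)
```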

To me, though, newly introduced kanji were hardest of all. Each level would start hard and become easier. In the first several days of a new level, all those new kanji really made my reviews difficult. Once I got toward the end of a level and started seeing more vocabulary using those characters in my reviews, it got easier.

I started to slow down doing lessons immediately after leveling-up (when I started seeing more of that “evil pink”) then speed up at the end of a level when new lessons were entirely vocabulary for the kanji I’d already learned.

I wanted a dashboard display that made this mental calculation visible. Just Tarzan-logic: “new kanji hard — slow down”.

Enter the GanbarOmeter.

How I use this

I do my reviews daily (only very rarely more than once per day). This means that I “miss” the 4-hour and 8-hour review intervals for items in the first two stages. I simply don’t review items in the first two stages often enough.

So, every morning I pour a fresh cup of coffee and look at my dashboard. This only takes a second, just a simple glance.

Unsurprisingly, the default settings for the script match my own preferences.

I want the GanbarOmeter to calculate a weighted value somewhere between 130 and 170 (this is roughly my number of Apprentice items, but early kanji are weighted more heavily and I also count items in the guru stages to account for leeches). Rather than doing mental gymnastics with numbers, I just want to see the needle in the green zone, pointing almost vertical.

The speed dial is mostly to ensure I maintain a consistent pace. I don’t like to do more than 150 or so items in any individual session, and I don’t want to take more than a half-hour to forty-five minutes to do my reviews. A pace of about 6.5 seconds-per-question “feels” about right for me.

The review graph shows me how much work I’ve done for the past few days. If my expected number of daily reviews starts creeping up, I may decide to do fewer lessons no matter what my GanbarOmeter says.

Next, I click on the self-study button to review kanji in the first two-stages “out-of-band” (I only review kanji this way, ignoring radicals and vocabulary, because I find them the most difficult). If I don’t know an answer, I type “ke” to answer incorrectly, then hit F1 to reveal the correct answer before moving on.

At the beginning of a level, I might have 10 or more new kanji in stages 1 and 2. At the end of a level I’ll rarely have any.

Regardless, I’ll repeat the self-study quiz until I can answer all the items 100% correctly. Then I’ll hit the escape key three times in a row and start my “real” Wanikani review session.

The Wanikani review session proceeds normally. I’m NOT a fan of re-ordering scripts or the like. I’ve no qualms about displaying additional information, but I’m extremely suspicious of anything that changes how the Wanikani SRS system actually behaves.

About the only thing I do differently than many during my reviews is to spend time on incorrect answers as I go, trying to figure out why I missed them. Many people wait until the end of their review sessions to figure out why they missed things. It’s very much personal preference.

Only after my review session do I decide whether or not to do any lessons. I navigate back to the dashboard to ensure I have the latest GanbarOmeter value displayed. If it’s in the green (or on the left side) I’ll do at least 5 lessons, if not 10, 15 or even 20 lessons.

In practice, I might think I want to do a large number of lessons because the GanbarOmeter displayed a highly left-of-center value, but after doing 5 lessons I might choose to bail if they seemed harder than usual.

Toward the end of a level, though, the vocabulary lessons often seem easy, so I might choose to do more lessons than the GanbarOmeter might seem to indicate.

In other words, the GanbarOmeter provides input to my decision-making process. I don’t just follow its guidance automatically.

Development Notes

I developed this using the Svelte compiler, with TypeScript for compile-time type checking, Jest as a testing framework, and Testing Library for additional testing semantics. I used Lucas Shanley’s wonderful tampermonkey-svelte template to package up my code as a user script.

It uses two primary widgets: Gauge.svelte to display a dial gauge, and BarChart.svelte to render a bar chart. Both were hand-developed by me using test-driven development.

The basic CSS for the dial gauges came from this excellent tutorial by dcode-software. I stole the basic layout of the BarChart from this Codepen by Ion Emil Negoita.

Shout-out to Basar Buyukkahraman’s wonderful course on TDD with Svelte.

The code leverages @rfindley’s wonderful WaniKani Open Framework user script to retrieve and cache results where possible. He and @kumirei from the Wanikani community helped me get started with this user script business!

If you want to help with development or simply want to validate that nothing nefarious is included in the user script:

  1. You’ll need to enable Allow access to file URLs in the Tampermonkey Chrome extension. This is conceivably a security risk, so you may want to disable the setting again after finishing your development work. See tampermonkey-svelte for details.

  2. Download the source from the github repository.

  3. Run npm install to install all the dependencies for compilation.

  4. Before compiling or running the code, you may want to type npm run test. All tests should pass.

  5. In one shell window, type tsc -w to run the TypeScript compiler in watch mode.

  6. In another shell window, type npm run dev to compile an un-minified dev version of the code and prepare for “live” updates.

  7. Copy the top several lines of the file ./dist/bundle.js. Just copy the header itself, everything through and including the // ==/UserScript== line. Don’t copy any actual code.

  8. In the Tampermonkey dashboard, click the “+” tab and paste in the headers (again, just the headers) from step 7. Save the file. This will install the ganbarometer-svelte -> dev script and prepare it for “live” updates. If you browse to the WK dashboard and enable this version of the script, any changes you make to the source code should show up when you refresh the page.
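For reference, a Tampermonkey metadata block looks roughly like this (the values below are illustrative — copy the actual header from ./dist/bundle.js rather than typing it by hand):

```javascript
// ==UserScript==
// @name         ganbarometer-svelte -> dev
// @match        https://www.wanikani.com/dashboard
// @require      file:///absolute/path/to/ganbarometer-svelte/dist/bundle.js
// ==/UserScript==
```

The file:// @require line is why step 1 asks you to enable Allow access to file URLs.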

This isn’t what I’d consider professional code: I plan a fair bit of clean-up and refactoring. Please be kind (I’m just an amateur) but any thoughts or comments are quite welcome. Hopefully, it isn’t too hard to figure out the current code organization. It’s definitely FAR better code than the previously published version of the script.


  • I still need to write more unit tests. Note that the GanbarOmeter tests won’t run unless you delete the line in the help file that prints the script version. There is nothing worse than toolchain issues that break tests, but I’ve not been able to resolve this yet.

  • Doubtless, bugs still lurk. The best way to find them is to put this out there, though. Please let me know if you discover any.

  • I’d like to disable the self-study quiz if there are no items available. The self-study quiz itself does the right thing if there aren’t any available, but it would be nice to save a click.

  • I’m currently caching the processed data for each widget, but each time the dashboard gets refreshed it retrieves 1 to 7 days worth of reviews (and all the assignments) from the Wanikani API. I really want to start caching these raw reviews as well, but I plan to release that as a separate project. I’ll create a new version of the Ganbarometer once that cache is complete.


whispers: psst, there’s no link

This tells me I spend forever doing reviews because I get distracted xD

Workload: It’s been one and a half years since I reached level 60, so not much going on for me on wanikani anymore. (561 days on lvl 60 according to Level Duration script)


Looks great!

One thing I noticed:
When you change the settings, the panel doesn’t reload - I have to refresh the page to apply the new settings

Isn’t that the same with all dashboard scripts?
Also, not sure it can be accomplished easily as the scripts are inherently tied to the screen being rendered :thinking: (but curious to learn more!)


I don’t know if this is a bug or not, but I don’t remember a review taking more than 1 minute, let alone 5 :face_with_raised_eyebrow:
Is there something I have to change in the settings?


First of all, thanks for making this script! It’s a fun motivator for me to do my lessons.

One thing, may I ask how the speed’s calculated? Because even taking into account the 72-hour average, I get about 12 s/r from the Heatmap script (1), but your speedometer gives me 70 s/r.

(1) 1209 r, 4h9m (pulled from heatmap from last three days), 14940 s/1209 r gives about 12.357 s/r.

Inherently yes. But a custom redraw upon save (WKOF settings allow a callback) can update the panel without having to reload the whole page. Especially if you have multiple scripts running a reload can take a few seconds.
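A minimal sketch of that idea, assuming WKOF’s on_save option (the surrounding names here are illustrative, not the script’s actual code):

```javascript
// Sketch: pass a redraw function as the settings dialog's on_save callback
// so the panel updates in place instead of requiring a full page reload.
// The on_save/script_id/title option names follow the WKOF Settings API;
// treat the rest as illustrative.
function makeSettingsConfig(renderPanel) {
  return {
    script_id: 'ganbarometer',
    title: 'GanbarOmeter',
    on_save: (settings) => renderPanel(settings), // redraw with new settings
  };
}
```

The dialog would then be created with something like `new wkof.Settings(makeSettingsConfig(render))`.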


I think I saw a setting that mentions the length of time between review sessions. Yep, ‘Session interval’.
Especially for people that review a few items, then handle a phone call before doing a few more reviews, a large session interval can make all that time count as ‘doing a review’.
Try shortening it to 5 or something and see what happens.

That’ll teach me to post a script just before going to bed. [Edit: not to mention replying before I’ve had my first coffee.]

I forgot that the first several people to try out a new script would be level 60s. I think this script is most useful for people between roughly levels 8 and 60, but I’m hopeful that it will eventually be useful for everyone.

Yup, that was my intended behavior which I intended to document in the first post (but cleverly forgot). I will edit the intro to mention this.

I’ll look into @rwesterhof’s suggestion of using a callback to refresh without going into an infinite loop. Currently, I load and update the settings on every refresh.

As described elsewhere, I think this is likely due to the Session interval value being too high (defaults to 10 minutes). Please set it to something like 2.5 minutes and let me know the result.

Also, could you enable debug then refresh and show me what’s printed in your JavaScript console? [Right-click, “inspect”, then click “console” in the developer window that appears.]

I first go through all the recent reviews looking for “sessions” (reviews done one after another without too much time in between each review). Each session tracks how many minutes elapsed during the session.

The reported speed is the sum of the minutes from all sessions, multiplied by 60 to convert to seconds, divided by the total number of reviews performed across all sessions.

The default maximum interval between reviews is 10 minutes. A lower value might be wiser (say, somewhere between 2.5 and 5 minutes). I will almost certainly lower the default in a future version. (Since the default maximum speed is 30 seconds, I’m currently thinking a value of 5x to 10x the maxSpeed might be a reasonable default.)
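In other words (a sketch, checked against the session totals that appear in the debug output later in this thread):

```javascript
// Hedged sketch of the reported-speed computation: total session minutes,
// converted to seconds, divided by the total number of reviews.
function secondsPerReview(sessions) {
  const totalMinutes = sessions.reduce((sum, s) => sum + s.minutes, 0);
  const totalReviews = sessions.reduce((sum, s) => sum + s.reviews, 0);
  return (totalMinutes * 60) / totalReviews;
}
// With 1497 total minutes and 1331 reviews (the debug output below),
// this works out to roughly 67 seconds per review.
```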

I think I just corrected a bug in the logic, though: in v0.1, if a session only had a single review, the minutes() for that session would report as zero. This is because the API only reports the start time of a review, not how long it took to submit an answer. So the start and end times of a session with only one review will be the same.

In v0.2 and later, I use maxSpeed/2 as an estimate for very short sessions (which I think can only happen with one review).
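The fix can be sketched like so (illustrative names; timestamps in milliseconds):

```javascript
// Sketch of the v0.2+ fix: a session's elapsed time is the gap between its
// first and last review timestamps (in minutes), floored at maxSpeed/2
// seconds so single-review sessions don't report zero minutes.
function sessionMinutes(timestamps, maxSpeedSeconds) {
  const elapsed = (timestamps[timestamps.length - 1] - timestamps[0]) / 60000;
  return Math.max(elapsed, maxSpeedSeconds / 2 / 60);
}
```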

It appears both scripts agree on the number of reviews, so the difference must be in the number of seconds it took to perform those reviews. Heatmap is using a value of 14,940 seconds as you point out. GanbarOmeter must be using a value of around 84,630 seconds (23.5 hours of review time over 3 days, which makes no sense).

I’d really like to see your console output with debug enabled: The minutes() value returned for one of the sessions must be way off for some reason.

Also: 417 reviews/day over three days!! Yow. I’d be whimpering for sure. :grin:

The defaults are definitely aimed at slow-and-steady users like myself. Speed-runners will likely want to bump up some of the values. In your case, you may want to set:

  • Desired apprentice quantity to 300 or higher so the gauge doesn’t peg. You’ve currently got 296 apprentice items. The goal is to have the gauge show roughly 50% (needle straight up) for the desired difficulty level. With the defaults, someone with exactly 100 apprentice items, answering fewer than 20% of reviews incorrectly, and with no kanji in Apprentice1 or Apprentice2 would cause the gauge to display exactly 50%. With 296 apprentice items, you’re already pegging the meter without accounting for misses and new kanji.

  • Maximum reviews per day to 800 or so. With the defaults, the gauge will show 50% at 150 reviews/day.

Can you enable debug, refresh, and send me your console output?

I should leave debug on until it’s had a bit more usage. New version coming up.

I need to look at the code rather than my phone. I’m pretty sure I have another bug in my speed calculation logic.

Version 0.5 is now posted.


  • debug enabled by default. If you had installed an earlier version, please go into settings → GanbarOmeter, click the checkbox for debug, save your settings, and refresh your browser window.

  • Session.minutes() no longer returns zero for sessions with only a single review. The minimum session time is now maxSpeed/2 seconds.

  • Changed debug log output slightly: delimits the start and end of GanbarOmeter logs, and includes the current settings.

  • Displays the number of misses per day in the Display gauge (as well as the number of new kanji)

Only a complete idiot would copy then forget to paste the link target! :roll_eyes:


Love it!


My bad, I was asleep! I’m not sure what’s happened, but I’ve updated to v0.5 and now the whole bar won’t even load; it gets stuck loading reviews, apparently (it’s been like this for quite a while).


And the console output (I think)

I’m not sure if this is really helpful because the script doesn’t seem to actually load, though.

Update: I’ve tried downgrading to v0.4 and v0.1. It’s still stuck.

What other scripts do you have loaded?

Do me a favor and try disabling every other script except Wanikani Open Framework and Ganbarometer, then quit and restart the browser (not just refresh).

I’m wondering if there is a weird interaction with another script (still my problem if so, but trying to debug).

The only other hypothesis is if the start date for requesting reviews is being calculated incorrectly. At level 60 you can have a LOT of reviews (the script attempts to retrieve just the last 72 hours worth).

Let me know if none of the above works and I can send you a one-off script that just logs before attempting to retrieve the reviews.


Restarting & isolating seems to have done the trick (restarting probably did it, as I’d already tried refreshing and isolating). It still seems to load quite slowly on its own compared to other scripts, oddly.

1331 reviews in 72 hours
104.7 misses per day
1497 total minutes
3 sessions: 
     - Start: Fri Sep 17 2021 12:05:21 GMT+0700 (Indochina Time)
       End: Fri Sep 17 2021 16:52:35 GMT+0700 (Indochina Time)
       Misses: 110
       Reviews: 441
       Review minutes: 287 
     - Start: Sat Sep 18 2021 11:41:19 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 22:01:29 GMT+0700 (Indochina Time)
       Misses: 92
       Reviews: 391
       Review minutes: 620 
     - Start: Sun Sep 19 2021 09:32:33 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 19:22:51 GMT+0700 (Indochina Time)
       Misses: 112
       Reviews: 499
       Review minutes: 590 
296 apprentice 2 newKanji 
444 reviews per day (0 - 300 
67 seconds per review (0 - 30) 
Difficulty: 1 (0-1) 
Load: 1 
Speed: 1 

although, just glancing at the output data, I’m pretty sure I’ve never had a review session longer than 3h, let alone 10h :stuck_out_tongue:

set to 2.5 and disable all scripts except wanikani open framework

I guess I’m a little bit confused: does “review minutes” mean time spent on a single item, or on a session? Based on the data, I think it means time spent on a review session.

Yeah, unfortunately that’s unavoidable. It takes a few seconds for the API to return the 1331 review items, so the gauges might not appear for a while.

I’ll try to improve a future version to render SOMETHING quickly (a “loading” message of some sort), then update the gauges once the data is retrieved from the API.

Each session comprises one or more reviews, each with a timestamp of when the review was started. The “review minutes” reported in the debug output is the difference between the timestamps of the first and last review within the session. This isn’t exactly accurate, because the timestamps are when the review of that item started — I’d really like to have the timestamp when the last review ended, but that data doesn’t exist.

The second to last session shows, for example, that you reviewed 100 items and answered all but 3 of those items correctly. The first item in that session was reviewed at 18:14 local time, and the last item was reviewed at 23:11, a span of 296 minutes.

Does that sound correct, or is the script calculating something wrong?

EDIT: Wait, clearly something isn’t being calculated correctly. With a max session interval of 2.5 minutes and 100 items, the longest possible session time would be 250 minutes. This info should help me track down the problem.

Hmm. Let me think about this and look at the code. It appears I may have a bug in the logic that finds sessions.

Were you doing reviews at both 9 am and 7pm on Sunday? The last session entry shows those as the beginning and ending timestamps. I suspect that this should be counted as multiple sessions.

Please upgrade to v1.5 which reports the settings now that it’s working again. Was this output using a Session interval of 10 minutes? Is there any chance you did at least one review every 10 minutes from 9 am to 7pm Sunday?

Bear with me while I figure this out…


I think so, yes.


I don’t think so, given that I was in classes from 10am to 3pm…

v0.5? :slight_smile: I updated and restarted Firefox, and it’s been loading for quite some time now… (something like 10m? or maybe 13.68m)

update: 15m

I think the issue involves date calculations with different timezones. It could be affecting the review load as well.

Please disable for a bit, and I’ll post a new version shortly as soon as I figure out what I’m doing wrong with Date objects.

Thanks for reporting!
