At long last, I’m happy to release this thing to the wild. It’s … grown. It
dances, it sings, it names your offspring, it predicts the weather, it knows
what you had for breakfast …
(I may have a problem. Fear is the mind-killer …)
Full installation instructions are below, but if you already have TamperMonkey installed just browse to https://github.com/wrex/ganbarometer-svelte/raw/main/published/v4/bundle.user.js and click “install”.
This is a TamperMonkey user script that (primarily) adds three graphical widgets to the Wanikani dashboard. All three provide information to help manage your workload as you progress through the WaniKani levels.
Quit reading this crazy diatribe and go install it already.
IMPORTANT The installation filename has changed to bundle.user.js. This should make it easier to install with TamperMonkey. Please delete any previous versions of this script before installing v4.0.3 (this filename will not change for any future major releases in the v4 train).
Changes in v4.0.4
- Fix a one byte bug: “radical” not “radicals”
- Use exact equality operator for all string comparisons ("==="). Old habits die hard.
- make it clearer that color inputs are overrides (including changing the cursor to a pointer)
- all tests work (AKA “skip borked tests!”)
- added scripts to publish (avoids manual errors)
- published version is now named “bundle.user.js” to let TamperMonkey install directly
- uses questions-per-minute for speed (display and in settings); also shows s/q
- added a version check for localStorage variables (resets to defaults on incompatible changes)
- removed disabled tzoffset setting (make your case if you need this)
- fixed (almost?) all the validation logic errors. Binds state bidirectionally across all settings components.
There is still one minor corner-case with the validation messages, but it’s just a minor annoyance and unlikely anyone will care (involves navigating away after creating an invalid setting). I’m probably going to go to a single range slider for min/max values in a future version anyway (or triple for min/target/max).
As always, let me know if you discover anything else I’ve missed.
There are three primary elements:
The GanbarOmeter itself. This gauge helps you decide whether to slow down or speed up doing lessons. Basically, you want to keep the needle within the light green region.
The Speed gauge. This gauge tells you how long you are taking, on average, to answer reading and meaning questions. (Note that this depends on several heuristics and statistical tricks. This information isn't measured, or at least isn't presented, by the Wanikani API.)
The Reviews chart, which displays a bar chart of how many reviews you performed each day (and the percentage of those reviews you answered correctly the first time).
If you click the “Data” navigation at the top, the graphs are replaced with a tabular view of the underlying data:
In addition to toggling between graphs and data, the nav bar at the top includes:
A slider at the top to choose the number of days worth of reviews to retrieve.
An icon to pull up the preference setting dialog.
(Optional) A launcher for @rfindley's Self-Study Quiz with just the "new" items in your assignment queue. "New" items are in stages 1 and 2 ("Apprentice 1" and "Apprentice 2"). This icon only appears if you have the Self-Study Quiz installed. By default it will only quiz you on new kanji, but you can choose whether to include radicals or vocabulary in the settings.
The first and most important graphical element is the GanbarOmeter itself. The purpose of this gauge is to tell you whether to slow down or speed up doing lessons depending on the counts, types, and SRS stages of upcoming assignments.
The GanbarOmeter displays a zero-center needle with three ranges. A numeric value is calculated based on upcoming assignments. That value is compared to lower and upper limits specified by the user. Values between the limits (in the green range) display a "good" label. Below the lower limit (the yellowish range) displays a "more effort needed" label. Above the upper limit (the reddish range) displays a "take a break" label.
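The label logic just described can be sketched as follows (the function and label names here are illustrative, not the script's actual identifiers):

```javascript
// Map a computed GanbarOmeter value to the displayed label,
// given the user-specified lower and upper limits.
function ganbarometerLabel(value, lowerLimit, upperLimit) {
  if (value < lowerLimit) return "more effort needed"; // yellowish range
  if (value > upperLimit) return "take a break";       // reddish range
  return "good";                                       // green range
}
```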
The speed gauge shows how long, on average, it takes you to answer an individual reading or meaning question. There is a setting for the target speed: a computed value exactly equal to the target displays the needle at the 50% location. Faster (lower seconds/question) displays in the lower part of the gauge, and slower in the higher range. Values beyond the limits derived from the target change the label and display in a warning color.
Note that this displays how much time is spent on each question (reading or meaning, including repeat questions for prior incorrect answers). This is quite different from the delay between individual review records (which is what the API returns).
Finally, the review chart shows a great deal of information:
- The number of reviews performed each day.
- The percentage of items answered correctly the first time (both meaning and reading).
- The target range (the light green box in the background).
- The expected daily number of reviews based on the makeup of the assignment queue (the horizontal dashed golden line).
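One plausible way such an expectation could be computed (this is a sketch, not necessarily the script's actual heuristic): each item at a given SRS stage comes due roughly once per stage interval, so it contributes about 1/interval reviews per day. The interval table below follows WaniKani's published stage timings, rounded to convenient values.

```javascript
// Approximate days between reviews at each SRS stage (WaniKani's
// published timings, rounded; stage numbering per the API).
const STAGE_INTERVAL_DAYS = {
  1: 4 / 24,  // Apprentice 1: 4 hours
  2: 8 / 24,  // Apprentice 2: 8 hours
  3: 1,       // Apprentice 3: ~1 day
  4: 2,       // Apprentice 4: ~2 days
  5: 7,       // Guru 1: 1 week
  6: 14,      // Guru 2: 2 weeks
  7: 30,      // Master: ~1 month
  8: 120,     // Enlightened: ~4 months
};

// Each assignment contributes 1/interval reviews per day on average.
function expectedDailyReviews(assignments) {
  return assignments.reduce(
    (sum, a) => sum + 1 / (STAGE_INTERVAL_DAYS[a.srs_stage] || Infinity),
    0
  );
}
```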
If you hover your mouse over an individual reviews bar, it displays the number of review items on that day as well as how many were answered correctly the first time (both reading and meaning).
The data view shows information in tabular form.
Note that the speed gauge displays the number of sessions (consecutive strings of reviews). It uses a statistical algorithm called the median absolute deviation on the intervals between review records to find sessions.
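A minimal sketch of MAD-based session detection follows. The exact threshold the script uses is not stated above; the common `median + 3 × MAD` cutoff is an assumption here.

```javascript
// Median of a numeric array (helper for the MAD calculation).
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Split an ascending list of review start-times (ms) into sessions:
// a gap much larger than typical (median + 3 * MAD of all gaps)
// is treated as the boundary between two sessions.
function splitSessions(startTimes) {
  if (startTimes.length < 2) return [startTimes];
  const gaps = startTimes.slice(1).map((t, i) => t - startTimes[i]);
  const med = median(gaps);
  const mad = median(gaps.map((g) => Math.abs(g - med)));
  const threshold = med + 3 * mad;
  const sessions = [[startTimes[0]]];
  for (let i = 1; i < startTimes.length; i++) {
    if (startTimes[i] - startTimes[i - 1] > threshold) sessions.push([]);
    sessions[sessions.length - 1].push(startTimes[i]);
  }
  return sessions;
}
```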
To repeat, the speed table shows question accuracy (the percentage of individual reading or meaning questions answered correctly) while the review accuracy table displays item accuracy (the percentage of review items where both the reading and meaning were answered correctly the first time).
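The two accuracy measures can be sketched from the shape of a WaniKani review record (which carries `incorrect_meaning_answers` and `incorrect_reading_answers` counts). This sketch assumes kanji/vocabulary items, which have one meaning and one reading question each; radicals have only a meaning question and are ignored here.

```javascript
// Compute question accuracy vs. item accuracy from review records.
function accuracies(reviews) {
  let questions = 0;
  let correctQuestions = 0;
  let correctItems = 0;
  for (const r of reviews) {
    const misses = r.incorrect_meaning_answers + r.incorrect_reading_answers;
    questions += 2 + misses; // every miss repeats that question later
    correctQuestions += 2;   // each question is eventually answered correctly
    if (misses === 0) correctItems += 1; // both right the first time
  }
  return {
    questionAccuracy: correctQuestions / questions, // speed table
    itemAccuracy: correctItems / reviews.length,    // review accuracy table
  };
}
```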
You can retrieve between one and seven days of reviews.
If @rfindley’s Self-Study Quiz is installed, a hand-drawn icon should be visible to launch it. (I tried to make it look like flash-cards). The idea is to let you do “out-of-band” reviews of newly introduced items as often as possible before doing your “proper” reviews within the SRS system.
Items in the first two stages have very short review-cycles. It doesn’t hurt to review the newest items more frequently. Once you answer them correctly enough times in the real Wanikani SRS system, they’ll move out of the earliest stages.
I feel strongly that these "extra" reviews of items in the first two stages aren't "cheating". The whole idea of an SRS is to repeat the stuff that needs it as often as possible, and items in the first two stages absolutely should be reviewed as much as possible.
Once they’ve left the earliest stages, it’s best to let the SRS system figure out when you should next review an item.
Clicking the right-most icon will bring up the settings dialog:
There are separate sections for each of the different widgets. There are also sections for "advanced" settings you shouldn't have to touch, as well as appearance settings (including where on the dashboard you want to see the Ganbarometer, and your preferred colors).
Install the Wanikani Open Framework.
Navigate to your Wanikani dashboard.
Click on the TamperMonkey settings (there should be an icon in your menu bar). If you happen to have an older version of this script installed, please delete it. Open the TamperMonkey dashboard and click the "Utilities" tab.
At the very bottom you will see “Install from URL”. Cut and paste this URL into the box, and click “Install” (you can also just navigate to that link in your browser to install it in TamperMonkey):
- Click “Install” on the next page to actually add the script. Then navigate back to your dashboard and refresh the page. You should now see the Ganbarometer!
More importantly, though, I’ve found this script incredibly useful as I progress through the levels. Not to put too fine a point on it, I wrote this for myself.
The Wanikani designers have done a truly amazing job. The site teaches you to read kanji in the most efficient AND ALMOST EFFORTLESS way possible. It’s like magic.
The only requirements are that you:
DO ALL YOUR AVAILABLE REVIEWS EVERY SINGLE DAY, and
Do a sufficient number of lessons to maintain a “comfortable” pace while still meeting requirement 1.
In other words, you really must try to get your review queue down to zero at least once every single day. (Life happens, and it’s not the end of the world if you miss a day or two here and there, but you’ll pay for it in the end if you don’t keep up with your reviews.) Doing your reviews every day (or nearly) is non-negotiable.
Lessons though, feed the review queue. The more lessons you do, the more reviews you’ll spend time on. Lessons are the accelerator pedal, reviews are miles under the wheels, and there is no brake pedal! Once you’ve completed a lesson, you’ve launched that item into your review queue for the foreseeable future — there’s no pulling it back. Lessons you do today will have an impact on your review workload months in the future!
You’ve no choice regarding reviews, you’ve got to do them all. But it’s completely up to you to figure out how many lessons to do each day. Lessons are the only thing under your control!
Very smart people who've gone before me have figured out various "algorithms" to help decide when to speed up or slow down with lessons. One common rule is to keep your "Apprentice Queue" at about 100 items.
A more sophisticated algorithm is to keep “apprentice items” (stages 1-4) plus 1/10th of “guru” items (stages 5 and 6) around 150 items.
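That rule of thumb is easy to express in code. This sketch follows the WaniKani API's `srs_stage` numbering (1-4 = Apprentice, 5-6 = Guru); the function name is mine, not from any existing script.

```javascript
// "Apprentice items plus one-tenth of Guru items" workload heuristic.
function workloadScore(assignments) {
  let apprentice = 0;
  let guru = 0;
  for (const a of assignments) {
    if (a.srs_stage >= 1 && a.srs_stage <= 4) apprentice += 1;
    else if (a.srs_stage === 5 || a.srs_stage === 6) guru += 1;
  }
  return apprentice + guru / 10; // aim to keep this near 150
}
```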
To me, though, newly introduced characters were hardest of all. Each level would start hard and become easier. In the first several days of a new level, all those new kanji really made my reviews difficult. Once I got toward the end of a level and started seeing more vocabulary using those characters in my reviews, it got easier.
I started to slow down doing lessons immediately after leveling-up (when I started seeing more of that “evil pink”) then speed up at the end of a level when new lessons were entirely vocabulary for the kanji I’d already learned.
I wanted a dashboard display that made this mental calculation visible. Just Tarzan-logic: "new kanji hard — slow down".
Enter the GanbarOmeter.
I do my reviews daily (only very rarely more than once per day). This means that I “miss” the 4-hour and 8-hour review intervals for items in the first two stages. I simply don’t review items in the first two stages often enough.
So, every morning I pour a fresh cup of coffee and look at my dashboard. This only takes a second, just a simple glance.
Unsurprisingly, the default settings for the script match my own preferences.
I want the GanbarOmeter to calculate a weighted value somewhere between 130 and 170 (this is roughly my number of Apprentice items, but early kanji are weighted more heavily and I also count items in the guru stages to account for leeches). Rather than doing mental gymnastics with numbers, I just want to see the needle in the green zone, pointing almost vertical.
The speed dial is mostly to ensure I maintain a consistent pace. I don’t like to do more than 150 or so items in any individual session, and I don’t want to take more than a half-hour to forty-five minutes to do my reviews. A pace of about 6.5 seconds-per-question “feels” about right for me.
The review graph shows me how much work I've done for the past few days. If my expected number of daily reviews starts creeping up, I may decide to do fewer lessons no matter what my GanbarOmeter says.
Next, I click on the self-study button to review kanji in the first two-stages “out-of-band” (I only review kanji this way, ignoring radicals and vocabulary, because I find them the most difficult). If I don’t know an answer, I type “ke” to answer incorrectly, then hit F1 to reveal the correct answer before moving on.
At the beginning of a level, I might have 10 or more new kanji in stages 1 and 2. At the end of a level I’ll rarely have any.
Regardless, I’ll repeat the self-study quiz until I can answer all the items 100% correctly. Then I’ll hit the escape key three times in a row and start my “real” Wanikani review session.
The Wanikani review session proceeds normally. I'm NOT a fan of re-ordering scripts or the like. I've no qualms about displaying additional information, but I'm extremely suspicious of anything that changes how the Wanikani SRS system actually behaves.
About the only thing I do differently from many people during my reviews is to spend time on incorrect answers as I go, trying to figure out why I missed each one. Many people wait until the end of their review sessions to figure out why they missed things. It's very much personal preference.
Only after my review session do I decide whether or not to do any lessons. I navigate back to the dashboard to ensure I have the latest GanbarOmeter value displayed. If it’s in the green (or on the left side) I’ll do at least 5 lessons, if not 10, 15 or even 20 lessons.
In practice, I might think I want to do a large number of lessons because the GanbarOmeter displayed a highly left-of-center value, but after doing 5 lessons I might choose to bail if they seemed harder than usual.
Toward the end of a level, though, the vocabulary lessons often seem easy, so I might choose to do more lessons than the GanbarOmeter might seem to indicate.
In other words, the GanbarOmeter provides input to my decision-making process. I don't just follow its guidance automatically.
I developed this with the Svelte compiler, using TypeScript for compile-time type checking, Jest as a testing framework, and Testing Library for additional testing semantics. I used Lucas Shanley's wonderful tampermonkey-svelte template to package up my code as a user script.
It uses two primary widgets:
- Gauge.svelte to display a dial gauge, and
- BarChart.svelte to render a bar chart.
Both were hand-developed using Test-Driven Development.
Shout-out to Basar Buyukkahraman’s wonderful course on TDD with Svelte.
The code leverages @rfindley's wonderful WaniKani Open Framework user script to retrieve and cache results where possible. He and @kumirei from the Wanikani community helped me get started with this user script business!
If you want to help with development or simply want to validate that nothing nefarious is included in the user script:
You'll need to enable "Allow access to file URLs" in the Chrome extension settings for TamperMonkey. This is conceivably a security risk, so you may want to disable the setting again after finishing your development work. See tampermonkey-svelte for details.
Download the source from the github repository.
Run `npm install` to install all the dependencies for compilation.
Before compiling or running the code, you may want to run `npm run test`. All tests should pass.
In one shell window, run `tsc -w` to start the TypeScript compiler in watch mode.
In another shell window, run `npm run dev` to compile an un-minified dev version of the code and prepare for "live" updates.
Copy the top several lines of the file `./dist/bundle.js`. Just copy the header itself, everything through and including the `// ==/UserScript==` line. Don't copy any actual code.
In the TamperMonkey dashboard, click the "+" tab and paste in the headers (again, just the headers) from the previous step. Save the file. This will install the `ganbarometer-svelte -> dev` script and prepare it for "live" updates. If you browse to the WK dashboard and enable this version of the script, any changes you make to the source code should show up when you refresh the page.
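For reference, the metadata header you paste looks roughly like the block below. The field values here are illustrative only (the real header to copy lives at the top of your own `./dist/bundle.js`); the key idea is that `@require` points at your local build via a `file://` URL, which is why the "Allow access to file URLs" permission is needed.

```javascript
// ==UserScript==
// @name         ganbarometer-svelte -> dev
// @version      4.x-dev
// @match        https://www.wanikani.com/dashboard
// @require      file:///absolute/path/to/ganbarometer-svelte/dist/bundle.js
// @grant        none
// ==/UserScript==
```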
This isn’t what I’d consider professional code: I plan a fair bit of clean-up and refactoring. Please be kind (I’m just an amateur) but any thoughts or comments are quite welcome. Hopefully, it isn’t too hard to figure out the current code organization. It’s definitely FAR better code than the previously published version of the script.
I still need to write more unit tests.
Doubtless, bugs still lurk. The best way to find them is to put this out there, though. Please let me know if you discover any.
I'd like to disable the self-study quiz if there are no items available. The self-study quiz itself does the right thing if there aren't any available, but it would be nice to save a click.
I’m currently caching the processed data for each widget, but each time the dashboard gets refreshed I pull down 1 to 7 days worth of reviews (and all the assignments) from the Wanikani API. I really want to start caching these raw reviews as well, but I plan to release that as a separate project. I’ll create a new version of the Ganbarometer once that cache is complete.
Old v3.1 stuff
This is v3.1 of the GanbarOmeter user script.
This version of the script contains several ideas that have since been rethought. Please hold off installing this until v4 if you haven’t installed it already. The new version will be published within a week or two and contains many improvements.
I’ve tested this as well as I’m able manually, but without any automated tests there may still be bugs. Please let me know if you find any problems.
This script adds two gauges and a bar chart to your dashboard. After all, what’s a dashboard without gauges?
[If you like this script, you may also be interested in my Burns Progress user script shown at the top of the page in the screenshot.]
The gauges help you decide whether to speed up or slow down doing lessons. If the values displayed remain in the middle of the ranges, you should continue at roughly the same pace. If either turns yellow or red, or even pegs at the extreme right of the gauge, you might consider slowing down. Lower values mean you might want to speed up.
The bar graph shows a pseudo-Pareto breakdown of your response time intervals (the delays between successive reviews).
Difficulty — A heuristic representing how difficult your upcoming reviews will likely be. It displays on a scale from 0 to 100%, where the middle of the scale (50%) represents “normal difficulty.” Values at the higher end of the scale indicate that you’ll likely find it hard to answer most review items correctly. Values higher than 80% will turn the gauge yellow. Higher than 90% will turn it red.
The difficulty is mostly based on the number of Apprentice items you currently have under active review, but is also weighted by the percentage of reviews you’ve been answering incorrectly, as well as the number of new kanji in stages 1 and 2.
Reviews/day — This displays how much work you've been doing on average each day. Unsurprisingly, it displays the number of reviews per day. Note that the script averages the reviews/day across all sessions for the past three days by default. A pace of 150 reviews/day will display the gauge in the middle of its range.
Review intervals — This displays your average time between reviews in a session in seconds-per-review, as well as a breakdown of the intervals within various ranges. By default, it displays the counts in 9 different ranges:
- 0 to 10 seconds
- 10 to 20 seconds
- 20 to 30 seconds
- 30 seconds to 1 minute
- 1 minute to 1 minute 30 seconds
- 1 minute 30 seconds to 2 minutes
- 2 minutes to 5 minutes
- 5 minutes to 10 minutes
- greater than 10 minutes
Note that the Wanikani API does not measure how long it takes to answer a review item. It only tracks the start time of an individual review. These intervals measure the time from the start of one review to the start of the next. Since you normally review several items during a single session, the longer intervals (greater than 10 minutes) effectively represent the time between review sessions, while the shorter intervals represent the time between individual reviews.
The sum of all the counts in all “buckets” equals the total number of reviews you’ve performed over the past 72 hours (by default).
The settings menu provides control over all of the “magic numbers” used in these heuristics, but the defaults should suffice for most users.
NOTE: The Wanikani API can sometimes take a minute or two to return your recent review data. The script displays placeholder gauges and bar graphs until the data is retrieved. The server appears to cache results, however, so subsequent refreshes should happen quite quickly. Note that there is a settings option to display something immediately after loading the settings, before the review information has loaded from the API.
In normal use, the WK SRS behaves as a very complex system. Its behavior depends on several things, primarily:
1. Whether or not you finish all the reviews that are due on a given day.
2. How many review items you answer incorrectly in a given session.
3. The make-up of your "in progress" items: those radicals, kanji, and vocabulary items that have been reviewed at least once, but haven't yet been burned. This make-up includes:
   - The number of items in earlier (Apprentice) stages. The more of these, the more reviews will be due each day.
   - How many kanji are in the first two stages. Many people find kanji more difficult than radicals and vocabulary, especially when they've just been introduced and you don't have a lot of reviews for the item under your belt. Radicals don't have readings, and vocabulary often provides additional context, so they tend to be somewhat easier even in early stages.
4. The number of lessons you perform each day. Finishing a lesson moves that item into the first stage of the SRS.
Items 1 and 2 are mostly out of your control: You really must try to do all your reviews every day if at all possible, or things can get out of hand quickly. And the percentage of incorrect answers depends on how well your memory is being trained.
Item 3 can only be indirectly controlled.
That leaves just item 4 under your direct control: how quickly you do lessons has the greatest effect on how difficult you’ll find your daily reviews!
The GanbarOmeter attempts to make it easier to know when to speed up or slow down doing lessons.
Difficulty: displayed values and explanation
The Difficulty gauge uses some heuristics to tell you how “difficult” your upcoming reviews are likely to be, based on the stages of items under active review and the percentage of reviews you’ve been answering incorrectly recently.
With the default settings and no weighting factors applied, this gauge will display the needle at the halfway point if you currently have 100 items in Apprentice stages.
The number 100 is somewhat arbitrary and based on personal preference. You may want to adjust the "Desired number of apprentice items" setting to something other than 100, depending on your comfort level.
Additional weighting is applied for any kanji (not radicals or vocabulary) in stages 1 or 2.
Further weighting is applied if you’ve answered more than 20% (by default) of your daily average number of reviews incorrectly.
You can adjust the weightings with:
New kanji weighting factor (default: 0.05),
Typical percentage of items missed during reviews (default: 20), and
Extra misses weighting (default: 0.03).
A New kanji weighting factor of 0.05 means that 10 kanji items in stages 1 or 2 will be weighted 50% "heavier" than other items in the Apprentice bucket; in other words, each new kanji adds 5% (0.05) to the weighting.
Extra misses weighting of 0.03 increases the overall weight of your Apprentice items. With the defaults, if you had exactly 100 items in Apprentice stages, with no kanji items in stage 1 or stage 2, and answered fewer than 20 items incorrectly, then the gauge would display in the middle of the range.
Each extra “miss” (incorrectly answered item) beyond 20 items would make the Apprentice queue 3% heavier. If you had missed 24 items, for example, instead of displaying a Difficulty of 50%, it would display 56%:
Display value = (100 apprentice items × (1 + 0.03 × 4 extra misses)) / 200 items at max scale = 112 / 200 = 0.56 = 56%
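The worked example can be written as code. The option names below map onto the settings listed earlier; only the extra-misses term is modeled here (the new-kanji weighting is omitted for brevity), so treat this as a sketch rather than the script's exact formula.

```javascript
// Difficulty display value (0.0 .. 1.0): extra misses beyond the
// "typical" count inflate the weight of the whole Apprentice queue.
function difficultyDisplay(apprentice, misses, opts = {}) {
  const {
    desiredApprentice = 100, // "Desired number of apprentice items"
    typicalMisses = 20,      // "Typical percentage of items missed"
    missWeight = 0.03,       // "Extra misses weighting"
  } = opts;
  const extraMisses = Math.max(0, misses - typicalMisses);
  const weighted = apprentice * (1 + missWeight * extraMisses);
  const maxScale = 2 * desiredApprentice; // desired count sits at 50%
  return weighted / maxScale;
}
```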
Reviews/day: displayed values and explanation
This is the easiest of the gauges to understand. It simply shows the average number of reviews you are performing per day (24 hours). By default, it averages the past three days (72 hours) worth of results.
The settings variable "Running average hours" allows you to change the default if you wish. It must be a value between 1 and 24, or a multiple of 24. Note that it may take a long time to retrieve reviews for very large values.
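A validator matching the rule just stated (between 1 and 24, or a multiple of 24) can be sketched as:

```javascript
// Accept 1..24, or any positive multiple of 24.
function isValidRunningAverageHours(h) {
  if (!Number.isInteger(h) || h < 1) return false;
  return h <= 24 || h % 24 === 0;
}
```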
Review intervals: displayed values and explanation
The heading estimates how long on average it takes you to answer a single review item, in units of seconds per review.
Unfortunately, the Wanikani API doesn’t provide this information directly. For valid technical reasons, Wanikani only stores the start time of an individual review.
So the GanbarOmeter first gathers (by default) the past 72 hours of reviews and breaks them into “sessions” based on the following heuristic:
Consecutive reviews that start within "Session interval" minutes of each other (2 minutes by default) are considered to be in the same session. Any longer interval starts a new session.
The total time spent on each session is the difference between the start time of the first review, and the start time of the last review within the session. Unfortunately, the timestamp of the final answer isn’t available, so session minutes are slightly undercounted (this undercounting effect is biggest for very short sessions of only a few reviews).
The average speed value displayed is the sum of the minutes from each session, divided by the total number of items reviewed by all sessions.
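Putting the two previous paragraphs together as a sketch: each session is an ascending list of review start-times, its duration is last start minus first start (slightly undercounted, as noted), and the displayed speed is total time divided by total reviews.

```javascript
// Average seconds-per-review across sessions, where each session is
// an ascending array of review start-times in milliseconds.
function secondsPerReview(sessions) {
  let totalSeconds = 0;
  let totalReviews = 0;
  for (const starts of sessions) {
    // Duration = last start minus first start (final answer time is
    // unavailable, so short sessions are undercounted).
    totalSeconds += (starts[starts.length - 1] - starts[0]) / 1000;
    totalReviews += starts.length;
  }
  return totalSeconds / totalReviews;
}
```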
The bar graph breaks down all of the intervals between reviews into different “buckets”. If a review occurs within 10 seconds of the immediately preceding review, it will increase that count by 1, for example.
The bucket ranges are for intervals between:
- 0 to 10 seconds
- 10 to 20 seconds
- 20 to 30 seconds
- 30 seconds to 1 minute
- 1 minute to 1 minute 30 seconds
- 1 minute 30 seconds to 2 minutes
- 2 minutes to 5 minutes
- 5 minutes to 10 minutes
- greater than 10 minutes
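The bucketing itself is straightforward; this sketch encodes the nine upper bounds listed above in seconds (with Infinity standing in for "greater than 10 minutes"):

```javascript
// Upper bound (exclusive) of each bucket, in seconds.
const BUCKET_UPPER_BOUNDS = [10, 20, 30, 60, 90, 120, 300, 600, Infinity];

// Count how many inter-review intervals fall into each bucket.
function bucketIntervals(intervalSeconds) {
  const counts = new Array(BUCKET_UPPER_BOUNDS.length).fill(0);
  for (const s of intervalSeconds) {
    counts[BUCKET_UPPER_BOUNDS.findIndex((ub) => s < ub)] += 1;
  }
  return counts;
}
```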
Intervals to the right of the graph normally indicate delays between sessions, while intervals on the left are between individual reviews.
This is a fairly complex script involving several heuristics. I’ve only been using it for a few days, so it’s possible that further tuning will be necessary.
Despite all the complexity explained above, the GanbarOmeter is easy to use in practice. It provides a wealth of info in a fairly condensed yet still easy to understand format.
I find it useful, and I hope you do, too!
Please let me know in this thread if you do uncover any issues.
v3.1 lose it if you don’t use it
- Remove the require for review-cache (hopefully speeding up display of the meters)
- Released 9/26/2021
- Numerous bug fixes
- Pace renamed to “Reviews/day”, shows session count, and total reviews
- Difficulty gauge shows weighted items in bold
- Settings dialog cleanup (section names)
- Added settings for pareto buckets
- Added setting to make immediate loading an option
- Weighting settings now text boxes (no incrementor) with custom validator
v2.0 Two gauges and a chart walk into a bar
- Released 9/24/2021
- Uses a “pareto” (-ish) chart to display review-interval breakdown (instead of a gauge
for average seconds per review).
- Settings changes no longer require a manual refresh of the page.
- Displays gauges immediately, then updates values when WK API call returns
- Custom validator for Interval (must be between 1-24, or a multiple of 24 hours)
- Fixed a bug when less than a full day of reviews was retrieved
- renamed “load” to “pace”
- versioning of settings (allows invalidation of stored settings with new releases)
- layout tweaks and cleanup
v1.0 First usable release
- Released Monday, 9/20/2021.
- Fixes a silly but major bug with the Speed gauge (now interprets "Session interval" in minutes, not hours).
- Uses WKOF Settings dialog for user settings (with input validation).
- Adds more info to debug log including inter-session intervals, and pareto
of inter-review intervals.
- Attempts to handle Date objects correctly.
- Displays misses/day as well as new kanji count
in difficulty gauge.
v0.1 Initial alpha release
- Released around midnight on Friday, 9/17/2021
- Add other theming besides the background color