[Userscript] The GanbarOmeter

The GanbarOmeter v4.0.4

At long last, I’m happy to release this thing to the wild. It’s … grown. It
dances, it sings, it names your offspring, it predicts the weather, it knows
what you had for breakfast …

(I may have a problem. Fear is the mind-killer …)

Full installation instructions are below, but if you already have TamperMonkey installed just browse to https://github.com/wrex/ganbarometer-svelte/raw/main/published/v4/bundle.user.js and click “install”.

What is it?

This is a Tampermonkey user script that (primarily) adds three graphical widgets to the Wanikani dashboard. All three provide information to help manage your workload as you progress through the WaniKani levels.

Quit reading this crazy diatribe and go install it already.

IMPORTANT The installation filename has changed to bundle.user.js. This should make it easier to install with Tampermonkey. Please delete any previous versions of this script before installing v4.0.4 (this filename will not change for any future major releases in the v4 train).

Changes in v4.0.4

Changes between v4.0.4 and v4.0.3

  • Fix a one-byte bug: “radical”, not “radicals”
  • Use the strict equality operator ("===") for all string comparisons. Old habits die hard.

Changes between v4.0.3 and v4.0.2

  • Make it clearer that the color inputs are overrides (including changing the cursor to a pointer)
  • All tests work (AKA “skip borked tests!”)
  • Added publish scripts (avoids manual errors)
  • The published version is now named “bundle.user.js” so Tampermonkey can install it directly
  • Speed is now shown in questions-per-minute (in the display and in settings); seconds-per-question is also shown
  • Added a version check for the localStorage variables (resets to defaults on incompatible changes; see the sketch below)
  • Removed the disabled tzoffset setting (make your case if you need this)
  • Fixed (almost?) all the validation logic errors. State is now bound bidirectionally across all settings components.
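
For the curious, the version check mentioned above can be as simple as this sketch (the key name, version constant, and stored shape are illustrative, not the script’s actual code):

// Sketch: reset stored settings to defaults when the stored schema
// version doesn't match the current one (illustrative names throughout).
const SETTINGS_KEY = "gbSettings";
const SETTINGS_VERSION = 4; // bumped on incompatible changes

function loadSettings<T>(defaults: T): T {
  try {
    const raw = localStorage.getItem(SETTINGS_KEY);
    if (raw) {
      const parsed = JSON.parse(raw);
      if (parsed.version === SETTINGS_VERSION) return parsed.settings as T;
    }
  } catch {
    // fall through to defaults on corrupt data
  }
  localStorage.setItem(
    SETTINGS_KEY,
    JSON.stringify({ version: SETTINGS_VERSION, settings: defaults })
  );
  return defaults;
}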

There is still one minor corner case with the validation messages, but it’s only a minor annoyance that’s unlikely to bother anyone (it involves navigating away after creating an invalid setting). I’ll probably move to a single range slider for min/max values in a future version anyway (or a triple slider for min/target/max).

As always, let me know if you discover anything else I’ve missed.

User Interface

User interface overview

There are three primary elements:

  • The GanbarOmeter itself. This gauge helps you decide whether to slow down or speed up doing lessons. Basically, you want to keep the needle within the light green region.

  • The Speed gauge. This gauge tells you how long you are taking, on average, to answer reading and meaning questions. (Note that this depends on several heuristics and statistical tricks. This information isn’t measured, or at least not presented, by the Wanikani API.)

  • The Reviews chart, which displays a bar chart of how many reviews you performed each day (and the percentage of those reviews you answered correctly the first time).

If you click the “Data” navigation at the top, the graphs are replaced with a tabular view of the underlying data:

Navigation

In addition to switching between graphs and data, the nav bar at the top includes several elements:

  • A slider at the top to choose the number of days’ worth of reviews to retrieve.

  • An icon to pull up the preference setting dialog.

  • (Optional) A launcher for @rfindley’s Self-Study Quiz with just the “new” items in your assignment queue. “New” items are in stages 1 and 2 (“Apprentice 1” and “Apprentice 2”). This icon only appears if you have the Self-Study Quiz installed. By default it will only quiz you on new kanji, but you can choose whether to include radicals or vocabulary in the settings.
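
Under the hood, picking the “new” items amounts to a filter like this sketch (srs_stage and subject_type are real fields on WaniKani assignment records; the function itself is my illustration, not the script’s actual code):

// Sketch: select assignments in SRS stages 1-2, limited to the subject
// types enabled in the settings.
interface Assignment {
  srs_stage: number;
  subject_type: "radical" | "kanji" | "vocabulary";
}

function newItems(
  assignments: Assignment[],
  enabledTypes: Array<Assignment["subject_type"]>
): Assignment[] {
  return assignments.filter(
    (a) =>
      (a.srs_stage === 1 || a.srs_stage === 2) &&
      enabledTypes.includes(a.subject_type)
  );
}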

The GanbarOmeter

The first and most important graphical element is the GanbarOmeter itself. The purpose of this gauge is to tell you whether to slow down or speed up doing lessons depending on the counts, types, and SRS stages of upcoming assignments.

The GanbarOmeter displays a zero-center needle with three ranges. A numeric value is calculated based on upcoming assignments. That value is compared to lower and upper limits specified by the user. Values between the limits (in the green range) display a “good” label. Below the lower limit (the yellowish range) displays a “more effort needed” label. Above the upper limit (the reddish range) displays a “take a break” label.
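
In code form, the zone logic boils down to something like this sketch (the names are mine; the weighted-value calculation itself is described under Settings and in the v3.1 notes further down):

// Sketch of the three-zone logic: compare the computed value against the
// user-configured lower and upper limits.
type ZoneLabel = "more effort needed" | "good" | "take a break";

function zoneFor(value: number, lower: number, upper: number): ZoneLabel {
  if (value < lower) return "more effort needed"; // yellowish range
  if (value > upper) return "take a break"; // reddish range
  return "good"; // green range
}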

The Speed Gauge

The speed gauge shows how long, on average, it takes you to answer an individual reading or meaning question. There is a setting for the target speed: a computed value exactly equal to the target will display the green dial at the 50% location. Faster (lower seconds per question) will display in the lower part of the gauge, and slower in the higher range. Values above or below the limits around the target will change the label and display in a warning color.

Note that this displays how much time is spent on each question (reading or meaning, including repeat questions for prior incorrect answers). This is quite different from the delay between individual review records (which is what the API returns).

The Reviews Chart

Finally, the review chart shows a great deal of information:

  • The number of reviews performed each day.
  • The percentage of items answered correctly the first time (both meaning and reading).
  • The target range (the light green box in the background).
  • The expected daily number of reviews based on the makeup of the assignment queue (the horizontal dashed golden line).

If you hover your mouse over an individual reviews bar, it displays the number of review items on that day as well as how many were answered correctly the first time (both reading and meaning).

[Screenshot: hovering over a review bar]

Data view

The data view shows information in tabular form.

Note that the speed table also displays the number of sessions (consecutive runs of reviews). The script applies a statistical algorithm called the median absolute deviation to the intervals between review records to find sessions.
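
As an illustration only (this is not the script’s literal code, and the cutoff multiplier is an assumption of mine), MAD-based session detection looks something like:

// Sketch: split review start-times (ms) into sessions wherever the gap to
// the previous review is an outlier by median-absolute-deviation standards.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function sessionize(startTimes: number[]): number[][] {
  if (startTimes.length < 2) return startTimes.length ? [startTimes] : [];
  const gaps = startTimes.slice(1).map((t, i) => t - startTimes[i]);
  const med = median(gaps);
  const mad = median(gaps.map((g) => Math.abs(g - med)));
  const cutoff = med + 3 * mad; // the multiplier of 3 is an assumption
  const sessions: number[][] = [[startTimes[0]]];
  gaps.forEach((gap, i) => {
    if (gap > cutoff) sessions.push([]);
    sessions[sessions.length - 1].push(startTimes[i + 1]);
  });
  return sessions;
}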

To repeat: the speed table shows question accuracy (the percentage of individual reading or meaning questions answered correctly), while the review accuracy table displays item accuracy (the percentage of review items where both the reading and meaning were answered correctly the first time).

The day range slider

[Screenshot: the day-range slider]

You can retrieve between one and seven days of reviews.

Self-study

If @rfindley’s Self-Study Quiz is installed, a hand-drawn icon should be visible to launch it. (I tried to make it look like flash-cards). The idea is to let you do “out-of-band” reviews of newly introduced items as often as possible before doing your “proper” reviews within the SRS system.

Items in the first two stages have very short review-cycles. It doesn’t hurt to review the newest items more frequently. Once you answer them correctly enough times in the real Wanikani SRS system, they’ll move out of the earliest stages.

I feel strongly that these “extra” reviews of items in the first two stages aren’t “cheating”. The whole idea of an SRS is to repeat the stuff that needs it as often as possible, and items in the first two stages absolutely should be reviewed as much as possible.

Once they’ve left the earliest stages, it’s best to let the SRS system figure out when you should next review an item.

Settings

Clicking the right-most icon will bring up the settings dialog:

There are separate sections for each of the different widgets. There are also sections for “advanced” settings you shouldn’t have to touch, as well as appearance settings (including where on the dashboard you want the GanbarOmeter to appear, and your preferred colors).

Installation

  1. Install a script manager of some sort. Tampermonkey or Violentmonkey should both work (I use Tampermonkey myself).

  2. Install the Wanikani Open Framework.

  3. Navigate to your Wanikani dashboard.

  4. Click on the Tampermonkey settings (there should be an icon in your menu bar). If you happen to already have an older version of this script installed, please delete it. Open the Tampermonkey dashboard and click the “Utilities” tab.

  5. At the very bottom you will see “Install from URL”. Cut and paste this URL into the box and click “Install” (you can also just navigate to that link in your browser to install it in Tampermonkey):

https://github.com/wrex/ganbarometer-svelte/raw/main/published/v4/bundle.user.js
  6. Click “Install” on the next page to actually add the script. Then navigate back to your dashboard and refresh the page. You should now see the GanbarOmeter!
Other notes

Why this script exists

Developing this has been a lot of fun. I’ve learned a ton (I didn’t know Javascript and barely knew HTML/CSS when I started).

More importantly, though, I’ve found this script incredibly useful as I progress through the levels. Not to put too fine a point on it, I wrote this for myself.

The Wanikani designers have done a truly amazing job. The site teaches you to read kanji in the most efficient AND ALMOST EFFORTLESS way possible. It’s like magic.

The only requirements are that you:

  1. DO ALL YOUR AVAILABLE REVIEWS EVERY SINGLE DAY, and

  2. Do a sufficient number of lessons to maintain a “comfortable” pace while still meeting requirement 1.

In other words, you really must try to get your review queue down to zero at least once every single day. (Life happens, and it’s not the end of the world if you miss a day or two here and there, but you’ll pay for it in the end if you don’t keep up with your reviews.) Doing your reviews every day (or nearly) is non-negotiable.

Lessons though, feed the review queue. The more lessons you do, the more reviews you’ll spend time on. Lessons are the accelerator pedal, reviews are miles under the wheels, and there is no brake pedal! Once you’ve completed a lesson, you’ve launched that item into your review queue for the foreseeable future — there’s no pulling it back. Lessons you do today will have an impact on your review workload months in the future!

You’ve no choice regarding reviews: you’ve got to do them all. But it’s completely up to you to figure out how many lessons to do each day. Lessons are the only thing under your control!

Very smart people who’ve gone before me have worked out various “algorithms” to help decide when to speed up or slow down with lessons. A common one is to keep your “Apprentice” queue at about 100 items.

A more sophisticated algorithm is to keep “apprentice items” (stages 1-4) plus one tenth of your “guru” items (stages 5 and 6) at around 150 items.
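
Restated as a sketch (my own formulation of that rule of thumb; stage numbers follow the WaniKani SRS):

// Rule-of-thumb workload: apprentice items plus one tenth of guru items,
// aiming for a total of roughly 150.
function workload(countsByStage: Record<number, number>): number {
  const apprentice = [1, 2, 3, 4] // Apprentice stages
    .reduce((n, stage) => n + (countsByStage[stage] ?? 0), 0);
  const guru = [5, 6] // Guru stages
    .reduce((n, stage) => n + (countsByStage[stage] ?? 0), 0);
  return apprentice + guru / 10;
}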

To me, though, newly introduced characters were hardest of all. Each level would start hard and become easier. In the first several days of a new level, all those new kanji really made my reviews difficult. Once I got toward the end of a level and started seeing more vocabulary using those characters in my reviews, it got easier.

I started to slow down doing lessons immediately after leveling-up (when I started seeing more of that “evil pink”) then speed up at the end of a level when new lessons were entirely vocabulary for the kanji I’d already learned.

I wanted a dashboard display that made this mental calculation visible. Just Tarzan-logic: “new kanji hard — slow down”.

Enter the GanbarOmeter.

How I use this

I do my reviews daily (only very rarely more than once per day). This means that I “miss” the 4-hour and 8-hour review intervals for items in the first two stages. I simply don’t review items in the first two stages often enough.

So, every morning I pour a fresh cup of coffee and look at my dashboard. This only takes a second, just a simple glance.

Unsurprisingly, the default settings for the script match my own preferences.

I want the GanbarOmeter to calculate a weighted value somewhere between 130 and 170 (this is roughly my number of Apprentice items, but early-stage kanji are weighted more heavily, and I also count items in the guru stages to account for leeches). Rather than doing mental gymnastics with numbers, I just want to see the needle in the green zone, pointing almost straight up.

The speed dial is mostly to ensure I maintain a consistent pace. I don’t like to do more than 150 or so items in any individual session, and I don’t want to take more than a half-hour to forty-five minutes to do my reviews. A pace of about 6.5 seconds-per-question “feels” about right for me.

The review graph shows me how much work I’ve done for the past few days. If my expected number of daily reviews starts creeping up, I may decide to do fewer lessons no matter what my GanbarOmeter says.

Next, I click on the self-study button to review kanji in the first two stages “out-of-band” (I only review kanji this way, ignoring radicals and vocabulary, because I find kanji the most difficult). If I don’t know an answer, I type “ke” to answer incorrectly, then hit F1 to reveal the correct answer before moving on.

At the beginning of a level, I might have 10 or more new kanji in stages 1 and 2. At the end of a level I’ll rarely have any.

Regardless, I’ll repeat the self-study quiz until I can answer all the items 100% correctly. Then I’ll hit the escape key three times in a row and start my “real” Wanikani review session.

The Wanikani review session proceeds normally. I’m NOT a fan of re-ordering scripts or the like. I’ve no qualms about displaying additional information, but I’m extremely suspicious of anything that changes how the Wanikani SRS system actually behaves.

About the only thing I do differently than many during my reviews is to spend time on incorrect answers as I go, trying to figure out why I missed them. Many people wait until the end of their review sessions to figure out why they missed things. It’s very much personal preference.

Only after my review session do I decide whether or not to do any lessons. I navigate back to the dashboard to ensure I have the latest GanbarOmeter value displayed. If it’s in the green (or to the left of center), I’ll do at least 5 lessons, and sometimes 10, 15, or even 20.

In practice, I might think I want to do a large number of lessons because the GanbarOmeter displayed a highly left-of-center value, but after doing 5 lessons I might choose to bail if they seemed harder than usual.

Toward the end of a level, though, the vocabulary lessons often seem easy, so I might choose to do more lessons than the GanbarOmeter might seem to indicate.

In other words, the GanbarOmeter provides input to my decision-making process. I don’t just follow its guidance automatically.

Development Notes

I developed this using the Svelte compiler, with TypeScript for compile-time type checking, Jest as a testing framework, and Testing Library for additional testing semantics. I used Lucas Shanley’s wonderful tampermonkey-svelte template to package up my code as a user script.

It uses two primary widgets: a Gauge.svelte to display a dial gauge, and a BarChart.svelte to render a bar chart. Both were hand-developed by me using test-driven development.

The basic CSS for the dial gauges came from this excellent tutorial by dcode-software. I stole the basic layout of the BarChart from this Codepen by Ion Emil Negoita.

Shout-out to Basar Buyukkahraman’s wonderful course on TDD with Svelte.

The code leverages @rfindley’s wonderful WaniKani Open Framework user script to retrieve and cache results where possible. He and @kumirei from the Wanikani community helped me get started with this user script business!

If you want to help with development or simply want to validate that nothing nefarious is included in the user script:

  1. You’ll need to enable Allow access to file URLs in the Tampermonkey Chrome extension. This is conceivably a security risk, so you may want to disable the setting again after finishing your development work. See tampermonkey-svelte for details.

  2. Download the source from the github repository.

  3. Run npm install to install all the dependencies for compilation.

  4. Before compiling or running the code, you may want to type npm run test. All tests should pass.

  5. In one shell window, type tsc -w to run the typescript compiler in watch mode.

  6. In another shell window, type npm run dev to compile a (un-minified) dev version of the code and prepare for “live” updates.

  7. Copy the top several lines of the file ./dist/bundle.js. Just copy the header itself: everything through and including the // ==/UserScript== line (see the example after this list). Don’t copy any actual code.

  8. In the Tampermonkey dashboard, click the “+” tab and paste in the headers (again, just the headers) from step 7. Save the file. This will install the ganbarometer-svelte ->dev script and prepare it for “live” updates. If you browse to the WK dashboard and enable this version of the script, any changes you make to the source code should show up when you refresh the page.
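
For reference, the pasted header will look roughly like the following (your @version, @namespace, and file path will differ; the file:// @require line is why step 1 enables file-URL access):

// ==UserScript==
// @name         ganbarometer-svelte -> dev
// @namespace    ...
// @version      ...
// @match        https://www.wanikani.com/dashboard
// @require      file:///absolute/path/to/ganbarometer-svelte/dist/bundle.js
// ==/UserScript==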

This isn’t what I’d consider professional code: I plan a fair bit of clean-up and refactoring. Please be kind (I’m just an amateur) but any thoughts or comments are quite welcome. Hopefully, it isn’t too hard to figure out the current code organization. It’s definitely FAR better code than the previously published version of the script.

TODO

  • I still need to write more unit tests

  • Doubtless, bugs still lurk. The best way to find them is to put this out there, though. Please let me know if you discover any.

  • I’d like to disable the self-study quiz if there are no new items available. The self-study quiz itself does the right thing if there aren’t any available, but it would be nice to save a click.

  • I’m currently caching the processed data for each widget, but each time the dashboard gets refreshed I pull down 1 to 7 days worth of reviews (and all the assignments) from the Wanikani API. I really want to start caching these raw reviews as well, but I plan to release that as a separate project. I’ll create a new version of the Ganbarometer once that cache is complete.


Old v3.1 stuff

This is v3.1 of the GanbarOmeter user script.

This version of the script contains several ideas that have since been rethought. Please hold off installing this until v4 if you haven’t installed it already. The new version will be published within a week or two and contains many improvements.

I’ve tested this as well as I’m able manually, but without any automated tests there may still be bugs. Please let me know if you find any problems.

This script adds two gauges and a bar chart to your dashboard. After all, what’s a dashboard without gauges?

[If you like this script, you may also be interested in my Burns Progress user script shown at the top of the page in the screenshot.]

The gauges help you decide whether to speed up or slow down doing lessons. If the values displayed remain in the middle of the ranges, you should continue at roughly the same pace. If either turns yellow or red, or even pegs at the extreme right of the gauge, you might consider slowing down. Lower values mean you might want to speed up.

The bar graph shows a pseudo-Pareto breakdown of your response time intervals (the delays between successive reviews).

  • Difficulty — A heuristic representing how difficult your upcoming reviews will likely be. It displays on a scale from 0 to 100%, where the middle of the scale (50%) represents “normal difficulty.” Values at the higher end of the scale indicate that you’ll likely find it hard to answer most review items correctly. Values higher than 80% will turn the gauge yellow. Higher than 90% will turn it red.

    The difficulty is mostly based on the number of Apprentice items you currently have under active review, but is also weighted by the percentage of reviews you’ve been answering incorrectly, as well as the number of new kanji in stages 1 and 2.

  • Reviews/day — This displays how much work you’ve been doing on average each day. Unsurprisingly, it displays the number of reviews per day. Note that the script averages the reviews/day across all sessions for the past three days by default. A pace of 150 reviews/day will display the gauge needle in the middle of its range.

  • Review intervals — This displays your average time between reviews in a session in seconds-per-review, as well as a breakdown of the intervals within various ranges. By default, it displays the counts in 9 different ranges:

    • 0 to 10 seconds
    • 10 to 20 seconds
    • 20 to 30 seconds
    • 30 seconds to 1 minute
    • 1 minute to 1 minute 30 seconds
    • 1 minute 30 seconds to 2 minutes
    • 2 minutes to 5 minutes
    • 5 minutes to 10 minutes
    • greater than 10 minutes

    Note that the Wanikani API does not measure how long it takes to answer a review item. It only tracks the start time of an individual review. These intervals measure from the start of one review item to the start of the next. Since you normally review several items during a single session, the longer intervals (>10’) effectively represent the time between review sessions, while the shorter intervals represent the time between individual reviews.

    The sum of all the counts in all “buckets” equals the total number of reviews you’ve performed over the past 72 hours (by default).

The settings menu provides control over all of the “magic numbers” used in these heuristics, but the defaults should suffice for most users.

NOTE: The Wanikani API can sometimes take a minute or two to return your recent review data. The script displays placeholder gauges and bar graphs until the data is retrieved. The server appears to cache results, however, so subsequent refreshes should happen quite quickly. Note that there is a settings option to display something immediately after loading the settings, before the review information has been loaded from the API.

Installation
  1. General script installation instructions

  2. Install the Wanikani Open Framework

  3. Install the GanbarOmeter from Greasy Fork.

Background

In normal use, the WK SRS behaves as a very complex system. Its behavior depends on several things, primarily:

  1. Whether or not you finish all the reviews that are due on a given day.

  2. How many review items you answer incorrectly in a given session.

  3. The make-up of your “in progress” items: those radicals, kanji, and vocabulary items that have been reviewed at least once, but haven’t yet been burned. This make-up includes:

    • The number of items in earlier (Apprentice) stages. The more of these, the more reviews will be due each day.

    • How many kanji are in the first two stages. Many people find kanji more difficult than radicals and vocabulary, especially when they’ve just been introduced and you don’t have a lot of reviews for the item under your belt. Radicals don’t have readings, and vocabulary often provides additional context, so they tend to be somewhat easier even when in early stages.

  4. The number of lessons you perform each day. Finishing a lesson moves that item into the first stage of the SRS.

Items 1 and 2 are mostly out of your control: You really must try to do all your reviews every day if at all possible, or things can get out of hand quickly. And the percentage of incorrect answers depends on how well your memory is being trained.

Item 3 can only be indirectly controlled.

That leaves just item 4 under your direct control: how quickly you do lessons has the greatest effect on how difficult you’ll find your daily reviews!

The GanbarOmeter attempts to make it easier to know when to speed up or slow down doing lessons.

Difficulty: displayed values and explanation

The Difficulty gauge uses some heuristics to tell you how “difficult” your upcoming reviews are likely to be, based on the stages of items under active review and the percentage of reviews you’ve been answering incorrectly recently.

With the default settings and no weighting factors applied, this gauge will display the needle at the halfway point if you currently have 100 items in Apprentice stages.

The number 100 is somewhat arbitrary and based on personal preference. You may want to adjust the Desired number of apprentice items setting to something other than 100, depending on your comfort level.

Additional weighting is applied for any kanji (not radicals or vocabulary) in stages 1 or 2.

Further weighting is applied if you’ve answered more than 20% (by default) of your daily average number of reviews incorrectly.

You can adjust the weightings with: New kanji weighting factor (default: 0.05), Typical percentage of items missed during reviews (default: 20), and Extra misses weighting (default: 0.03).

A New kanji weighting factor of 0.05 means that each kanji item in stages 1 or 2 makes the Apprentice queue 5% “heavier”; 10 such kanji would make it 50% heavier than it would be otherwise.

Similarly, an Extra misses weighting of 0.03 increases the overall weight of your Apprentice items. With the defaults, if you had exactly 100 items in Apprentice stages, with no kanji items in stage 1 or stage 2, and answered fewer than 20 items incorrectly, then the gauge would display in the middle of the range.

Each extra “miss” (incorrectly answered item) beyond 20 items would make the Apprentice queue 3% heavier. If you had missed 24 items, for example, instead of displaying a Difficulty of 50%, it would display 56%:

Display value = (100 apprentice items * (1 + 0.03 * 4 extra misses)) / 200 items at max scale
              = 112 / 200
              = 0.56
              = 56%
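
Putting the defaults together, the heuristic behaves roughly like this sketch (my paraphrase of the description above, not the script’s literal code):

// Difficulty heuristic with the default settings: weight the Apprentice
// queue by new kanji and by misses beyond the "typical" threshold, then
// scale so the desired apprentice count lands at 50%.
function difficulty(
  apprentice: number, // items in Apprentice stages
  newKanji: number, // kanji in stages 1 or 2
  extraMisses: number // misses beyond the typical 20% of daily reviews
): number {
  const desired = 100; // "Desired number of apprentice items"
  const kanjiWeight = 0.05; // "New kanji weighting factor"
  const missWeight = 0.03; // "Extra misses weighting"
  const weighted =
    apprentice * (1 + kanjiWeight * newKanji + missWeight * extraMisses);
  return Math.min(1, weighted / (2 * desired)); // 0.5 = normal difficulty
}

// difficulty(100, 0, 4) ≈ 0.56, matching the worked example above.
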
Reviews/day: displayed values and explanation

This is the easiest of the gauges to understand. It simply shows the average number of reviews you are performing per day (24 hours). By default, it averages the past three days (72 hours) worth of results.

The settings variable Running average hours allows you to change the default if you wish. It must be a value between 1 and 24, or a multiple of 24. Note that it may take a long time to retrieve reviews for very large values.
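
The rule itself is trivial to express (sketch, assuming whole hours):

// "Running average hours" must be 1-24, or a whole multiple of 24.
function validRunningAverageHours(h: number): boolean {
  return Number.isInteger(h) && h >= 1 && (h <= 24 || h % 24 === 0);
}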

Review intervals: displayed values and explanation

The heading shows an estimate of how long, on average, it takes you to answer a single review item, in units of seconds per review.

Unfortunately, the Wanikani API doesn’t provide this information directly. For valid technical reasons, Wanikani only stores the start time of an individual review.

So the GanbarOmeter first gathers (by default) the past 72 hours of reviews and breaks them into “sessions” based on the following heuristic:

Consecutive reviews started within Session interval minutes of each other (2 minutes by default) are considered to be in the same session. Any longer interval starts a new session.

The total time spent on each session is the difference between the start time of the first review and the start time of the last review within the session. Unfortunately, the timestamp of the final answer isn’t available, so session minutes are slightly undercounted (this undercounting effect is largest for very short sessions of only a few reviews).

The average speed value displayed is the sum of the minutes from each session, converted to seconds, divided by the total number of items reviewed across all sessions.
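
In code, the whole calculation looks something like this sketch (illustrative, not the script’s exact internals):

// Sketch: group review start-times (ms) into sessions using the "Session
// interval" setting, then average seconds per review across all sessions.
const SESSION_INTERVAL_MS = 2 * 60 * 1000; // 2 minutes by default

function secondsPerReview(startTimes: number[]): number {
  if (startTimes.length === 0) return 0;
  let totalMs = 0;
  let sessionStart = startTimes[0];
  let prev = startTimes[0];
  for (const t of startTimes.slice(1)) {
    if (t - prev > SESSION_INTERVAL_MS) {
      totalMs += prev - sessionStart; // close the session (undercounts, as noted)
      sessionStart = t;
    }
    prev = t;
  }
  totalMs += prev - sessionStart; // close the final session
  // Single-review sessions contribute zero here; a later version of the
  // script substitutes a small minimum for these instead.
  return totalMs / 1000 / startTimes.length;
}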

The bar graph breaks down all of the intervals between reviews into different “buckets”. If a review occurs within 10 seconds of the immediately preceding review, it will increase that count by 1, for example.

The bucket ranges are for intervals between:

  • 0 to 10 seconds
  • 10 to 20 seconds
  • 20 to 30 seconds
  • 30 seconds to 1 minute
  • 1 minute to 1 minute 30 seconds
  • 1 minute 30 seconds to 2 minutes
  • 2 minutes to 5 minutes
  • 5 minutes to 10 minutes
  • greater than 10 minutes

Intervals to the right of the graph normally indicate delays between sessions, while intervals on the left are between individual reviews.
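
Counting those buckets is a simple histogram (sketch; the edges, in seconds, match the default list above):

// Sketch: histogram the inter-review intervals (in seconds) into the
// default buckets. The last bucket collects everything over 10 minutes.
const BUCKET_EDGES_S = [10, 20, 30, 60, 90, 120, 300, 600]; // upper edges

function bucketize(intervals: number[]): number[] {
  const counts = new Array(BUCKET_EDGES_S.length + 1).fill(0);
  for (const s of intervals) {
    const i = BUCKET_EDGES_S.findIndex((edge) => s < edge);
    counts[i === -1 ? BUCKET_EDGES_S.length : i] += 1;
  }
  return counts;
}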

Caveats

This is a fairly complex script involving several heuristics. I’ve only been using it for a few days, so it’s possible that further tuning will be necessary.

There is also a distinct possibility of a bug or three lurking somewhere. I’m not an experienced Javascript programmer, so the script is unlikely to be terribly performant, reliable, or idiomatic.

Despite all the complexity explained above, the GanbarOmeter is easy to use in practice. It provides a wealth of info in a fairly condensed yet still easy to understand format.

I find it useful, and I hope you do, too!

Please let me know in this thread if you do uncover any issues.

Change log
  • v3.1 “Lose it if you don’t use it”

    • Released 10/18/2021
    • Removed the require for review-cache (hopefully speeding up display of the meters)
  • v3.0 虫を踏む (“stepping on bugs”)

    • Released 9/26/2021
    • Numerous bug fixes
    • Pace renamed to “Reviews/day”, shows session count, and total reviews
    • Difficulty gauge shows weighted items in bold
    • Settings dialog cleanup (section names)
    • Added settings for pareto buckets
    • Added setting to make immediate loading an option
    • Weighting settings now text boxes (no incrementor) with custom validator
  • v2.0 Two gauges and a chart walk into a bar

    • Released 9/24/2021
    • Uses a “pareto” (-ish) chart to display review-interval breakdown (instead of a gauge
      for average seconds per review).
    • Settings changes no longer require a manual refresh of the page.
    • Displays gauges immediately, then updates values when WK API call returns
    • Custom validator for Interval (must be between 1-24, or a multiple of 24 hours)
    • Fixes bug if less than full day of reviews retrieved (interval < 24
      hours)
    • renamed “load” to “pace”
    • versioning of settings (allows invalidation of stored settings with new
      script versions)
    • layout tweaks and cleanup
  • v1.0 First usable release

    • Released Monday, 9/20/2021.
    • Fixes a silly but major bug with Speed gauge (now interprets Session interval in minutes, not hours).
    • Uses WKOF Settings dialog for user settings (with input validation).
    • Adds more info to debug log including inter-session intervals, and pareto
      of inter-review intervals.
    • Attempts to handle Date objects correctly.
    • Displays misses/day as well as new kanji count
      in difficulty gauge.
  • v0.1 Initial alpha release

    Released around midnight on Friday, 9/17/2021

TODO
  • Add other theming besides the background color
38 Likes

whispers: psst, there’s no link


This tells me I spend forever doing reviews because I get distracted xD

Workload: It’s been one and a half years since I reached level 60, so not much going on for me on wanikani anymore. (561 days on lvl 60 according to Level Duration script)

4 Likes

Looks great!

One thing I noticed:
When you change the settings, the panel doesn’t reload - I have to refresh the page to apply the new settings

Isn’t that the same with all dashboard scripts?
Also, not sure it can be accomplished easily as the scripts are inherently tied to the screen being rendered :thinking: (but curious to learn more!)

3 Likes

I don’t know if this is a bug or not, but I don’t remember a review taking more than 1 minute, let alone 5 :face_with_raised_eyebrow:
Is there something I have to change in the settings?

1 Like

first of all, thanks for making this script! it’s a fun motivator for me to do my lessons.

One thing, may I ask how the speed’s calculated? Because even taking into account the 72-hour average, I get about 12 s/r from the Heatmap script (1), but your speedometer gives me 70 s/r.

(1) 1209 r, 4h9m (pulled from heatmap from last three days), 14940 s/1209 r gives about 12.357 s/r.

Inherently yes. But a custom redraw upon save (WKOF settings allow a callback) can update the panel without having to reload the whole page. Especially if you have multiple scripts running a reload can take a few seconds.

1 Like

I think I saw a setting that mentions the length of time between review sessions. Yep, ‘Session interval’.
Especially for people that review a few items, then handle a phone call before doing a few more reviews, a large session interval can make all that time count as ‘doing a review’.
Try shortening it to 5 or something and see what happens.

That’ll teach me to post a script just before going to bed. [Edit: not to mention replying before I’ve had my first coffee.]

I forgot that the first several people to try out a new script would be level 60s. I think this script is most useful for people between roughly levels 8 and 60, but I’m hopeful that it will eventually be useful for everyone.

Yup, that was the intended behavior, which I meant to document in the first post (but cleverly forgot). I will edit the intro to mention this.

I’ll look into @rwesterhof’s suggestion of using a callback to refresh without going into an infinite loop. Currently, I load and update the settings on every refresh.

As described elsewhere, I think this is likely due to the Session interval value being too high (defaults to 10 minutes). Please set it to something like 2.5 minutes and let me know the result.

Also, could you enable debug then refresh and show me what’s printed in your JavaScript console? [Right-click, “inspect”, then click “console” in the developer window that appears.]

I first go through all the recent reviews looking for “sessions” (reviews done one after another without too much time in between each review). Each session tracks how many minutes elapsed during the session.

The reported speed is the sum of all minutes from all sessions, multiplied by 60 to convert to seconds, divided by the total number of reviews performed across all sessions.
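
In formula form:

reported speed (seconds per review) = (total session minutes * 60) / (total reviews across all sessions)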

The default maximum interval between reviews is 10 minutes. A lower value might be wiser (say somewhere between 2.5 and 5 minutes). I will almost certainly lower the default in a future version. (Since the default maximum speed is 30 seconds, I’m currently thinking a value of 5X to 10X the maxSpeed might be a reasonable default.)

I think I just corrected a bug in the logic, though: in v0.1, if a session only had a single review, the minutes() for that session would report as zero. This is because the API only reports the start time of a review, not how long it took to submit an answer. So start and end time of a session with only one review will be the same.

In v0.2 and later, I use maxSpeed/2 as an estimate for very short sessions (which I think can only happen with one review).

It appears both scripts agree on the number of reviews, so the difference must be in the number of seconds it took to perform those reviews. Heatmap is using a value of 14,940 seconds as you point out. GanbarOmeter must be using a value of around 84,630 seconds (23.5 hours of review time over 3 days, which makes no sense).

I’d really like to see your console output with debug enabled: The minutes() value returned for one of the sessions must be way off for some reason.

Also: 417 reviews/day over three days!! Yow. I’d be whimpering for sure. :grin:

The defaults are definitely aimed at slow-and-steady users like myself. Speed-runners will likely want to bump up some of the values. In your case, you may want to set:

  • Desired apprentice quantity to 300 or higher so the gauge doesn’t peg. You’ve currently got 296 apprentice items. The goal is to have the gauge show roughly 50% (needle straight up) for the desired difficulty level. With the defaults, someone with exactly 100 apprentice items, answering fewer than 20% of reviews incorrectly, and with no kanji in Apprentice1 or Apprentice2 would cause the gauge to display exactly 50%. With 296 apprentice items, you’re already pegging the meter without accounting for misses and new kanji.

  • Maximum reviews per day to 800 or so. With the defaults, the gauge will show 50% at 150 reviews/day.

Can you enable debug, refresh, and send me your console output?

I should leave debug on until it’s had a bit more usage. New version coming up.

I need to look at the code rather than my phone. I’m pretty sure I have another bug in my speed calculation logic.

Version 0.5 is now posted.

Changes:

  • debug enabled by default. If you had installed an earlier version, please go into settings → GanbarOmeter, click the checkbox for debug, save your settings, and refresh your browser window.

  • Session.minutes() no longer returns zero for sessions with only a single review. The minimum session time is now maxSpeed/2 seconds.

  • Changed debug log output slightly: delimits the start and end of GanbarOmeter logs, and includes the current settings.

  • Displays the number of misses per day in the Difficulty gauge (as well as the number of new kanji)


Only a complete idiot would copy then forget to paste the link target! :roll_eyes:

1 Like

Love it!

1 Like

my bad, I was asleep! I’m not sure what’s happened, but I’ve updated to v0.5 and now the whole bar won’t even load; it gets stuck loading reviews, apparently (it’s been like this for quite a while)

[Screenshot: dashboard stuck on “loading reviews”]

And the console output (I think)

I’m not sure if this is really helpful because the script doesn’t seem to actually load, though.

Update: I’ve tried downgrading to v0.4 and v0.1. It’s still stuck.

What other scripts do you have loaded?

Do me a favor and try disabling every other script except Wanikani Open Framework and Ganbarometer, then quit and restart the browser (not just refresh).

I’m wondering if there is a weird interaction with another script (still my problem if so, but trying to debug).

The only other hypothesis is if the start date for requesting reviews is being calculated incorrectly. At level 60 you can have a LOT of reviews (the script attempts to retrieve just the last 72 hours worth).

Let me know if none of the above works and I can send you a one-off script that just logs before attempting to retrieve the reviews.

1 Like

restarting & isolating seems to have done the trick (restarting probably did it; I’d already tried refreshing and isolating). It seems to still load quite slowly alone compared to other scripts, oddly.

1331 reviews in 72 hours
104.7 misses per day
1497 total minutes
3 sessions: 
     - Start: Fri Sep 17 2021 12:05:21 GMT+0700 (Indochina Time)
       End: Fri Sep 17 2021 16:52:35 GMT+0700 (Indochina Time)
       Misses: 110
       Reviews: 441
       Review minutes: 287 
     - Start: Sat Sep 18 2021 11:41:19 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 22:01:29 GMT+0700 (Indochina Time)
       Misses: 92
       Reviews: 391
       Review minutes: 620 
     - Start: Sun Sep 19 2021 09:32:33 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 19:22:51 GMT+0700 (Indochina Time)
       Misses: 112
       Reviews: 499
       Review minutes: 590 
296 apprentice 2 newKanji 
444 reviews per day (0 - 300) 
67 seconds per review (0 - 30) 
Difficulty: 1 (0-1) 
Load: 1 
Speed: 1 

although, just glancing at the output data, I’m pretty sure I’ve never had a review session longer than 3h, let alone 10h :stuck_out_tongue:

set to 2.5 and disable all scripts except wanikani open framework

I guess I’m a little bit confused: does “review minutes” mean time spent on a single item or on a session? Based on the data, I think it means time spent on a review session.

Yeah, unfortunately that’s unavoidable. It takes a few seconds for the API to return the 1331 review items, so the gauges might not appear for a while.

I’ll try to improve a future version to render SOMETHING quickly (a “loading” message of some sort), then update the gauges once the data is retrieved from the API.

Each session comprises one or more reviews, each with a timestamp of when the review was started. The “review minutes” reported in the debug output is the difference between the timestamps of the first and last review within the session. This isn’t exactly accurate, because the timestamps are when the review of that item started — I’d really like to have the timestamp when the last review ended, but that data doesn’t exist.

The second to last session shows, for example, that you reviewed 100 items and answered all but 3 of those items correctly. The first item in that session was reviewed at 18:14 local time, and the last item was reviewed at 23:11, a span of 296 minutes.

Does that sound correct, or is the script calculating something wrong?

EDIT: Wait, clearly something isn’t being calculated correctly. With a max session interval of 2.5 minutes and 100 items, the longest possible session time would be 250 minutes. This info should help me track down the problem.

Hmm. Let me think about this and look at the code. It appears I may have a bug in the logic that finds sessions.

Were you doing reviews at both 9 am and 7pm on Sunday? The last session entry shows those as the beginning and ending timestamps. I suspect that this should be counted as multiple sessions.

Please upgrade to v1.5 which reports the settings now that it’s working again. Was this output using a Session interval of 10 minutes? Is there any chance you did at least one review every 10 minutes from 9 am to 7pm Sunday?

Bear with me while I figure this out…

1 Like

I think so, yes.

Yup.

I don’t think so, given that I was in classes from 10am to 3pm…

v0.5? :slight_smile: I updated and restarted Firefox, and it’s been loading for quite some time now… (something like 10m? or maybe 13.68m)

update: 15m

I think the issue involves date calculations with different timezones. It could be affecting the review load as well.

Please disable for a bit, and I’ll post a new version shortly as soon as I figure out what I’m doing wrong with Date objects.

Thanks for reporting!

1 Like