[Userscript] The GanbarOmeter

Speaking for myself, if I can’t recall something within about 10 seconds I just fail it (there’s a script for that). However, some people do sometimes take up to a minute on a single item, especially if it’s A1/A2. I don’t think 6.5 minutes should ever be spent on one item, though! :slight_smile:

Although in that case I think I just got distracted by some song I was listening to.

1 Like

Maybe that’s because the session interval setting is actually in hours and not minutes? So by default, if there is a break of less than 10 hours between reviews, they are considered to be in the same session.

return timeDifference <= maxMinutes * 1000 * 60 * 60;
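
If the setting is really meant to be in minutes, then presumably the fix is just to drop the extra factor of 60 (my guess at the intended conversion, not the author’s actual patch):

return timeDifference <= maxMinutes * 1000 * 60; // milliseconds per minute, no hours factor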
1 Like

Yeah, I thought this could be it too; looking at the log, none of the intervals were less than a few minutes apart. It would make sense, but I’m not the script author, so I don’t know. Presumably 0.1 hours would be 6 minutes.

Edit: I’ve lowered the session interval parameter by one order of magnitude, to 0.01 (presumably 0.6 minutes, or 36 s). It’s brought my stats to 8 seconds per review with 54 sessions, which… :thinking:

Anyhow I’ve put the debug output down there in case anyone’s interested.

------ GanbarOmeter debug output ------
settings:
  - interval: 72
  - sessionIntervalMax: 0.01
  - normalApprenticeQty: 100
  - newKanjiWeighting: 0.05
  - normalMisses: 0.2
  - extraMissesWeighting: 0.03
  - maxLoad: 300
  - maxSpeed: 30
  - backgroundColor: #f4f4f4
1169 reviews in 72 hours
82.7 misses per day
161 total minutes
54 sessions: Ganbarometer.user.js:452:13
     - Start: Sat Sep 18 2021 11:41:19 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 11:43:30 GMT+0700 (Indochina Time)
       Misses: 4
       Reviews: 14
       Review minutes: 2 Ganbarometer.user.js:470:15
     - Start: Sat Sep 18 2021 11:44:17 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 11:44:38 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 3
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Sat Sep 18 2021 11:45:53 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 11:55:51 GMT+0700 (Indochina Time)
       Misses: 11
       Reviews: 58
       Review minutes: 10 Ganbarometer.user.js:470:15
     - Start: Sat Sep 18 2021 11:56:28 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 12:03:34 GMT+0700 (Indochina Time)
       Misses: 22
       Reviews: 51
       Review minutes: 7 Ganbarometer.user.js:470:15
     - Start: Sat Sep 18 2021 12:08:11 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 12:10:27 GMT+0700 (Indochina Time)
       Misses: 5
       Reviews: 14
       Review minutes: 2 Ganbarometer.user.js:470:15
     - Start: Sat Sep 18 2021 14:36:02 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 14:48:47 GMT+0700 (Indochina Time)
       Misses: 19
       Reviews: 90
       Review minutes: 13 Ganbarometer.user.js:470:15
     - Start: Sat Sep 18 2021 15:08:12 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 15:15:14 GMT+0700 (Indochina Time)
       Misses: 2
       Reviews: 54
       Review minutes: 7 Ganbarometer.user.js:470:15
     - Start: Sat Sep 18 2021 16:33:25 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 16:41:17 GMT+0700 (Indochina Time)
       Misses: 10
       Reviews: 46
       Review minutes: 8 Ganbarometer.user.js:470:15
     - Start: Sat Sep 18 2021 21:54:33 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 22:01:29 GMT+0700 (Indochina Time)
       Misses: 17
       Reviews: 61
       Review minutes: 7 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 09:32:33 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 09:34:05 GMT+0700 (Indochina Time)
       Misses: 1
       Reviews: 11
       Review minutes: 2 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 09:36:09 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 09:36:51 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 7
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 09:37:53 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 09:38:05 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 3
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 09:38:46 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 09:41:20 GMT+0700 (Indochina Time)
       Misses: 2
       Reviews: 18
       Review minutes: 3 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 09:44:01 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 09:44:58 GMT+0700 (Indochina Time)
       Misses: 3
       Reviews: 6
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 09:46:22 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 09:48:13 GMT+0700 (Indochina Time)
       Misses: 3
       Reviews: 12
       Review minutes: 2 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 16:20:01 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 16:25:56 GMT+0700 (Indochina Time)
       Misses: 7
       Reviews: 41
       Review minutes: 6 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 16:30:00 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 16:45:56 GMT+0700 (Indochina Time)
       Misses: 21
       Reviews: 100
       Review minutes: 16 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 17:13:44 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 17:13:54 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 2
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 17:32:52 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 17:33:41 GMT+0700 (Indochina Time)
       Misses: 3
       Reviews: 10
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 17:36:08 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 17:36:34 GMT+0700 (Indochina Time)
       Misses: 1
       Reviews: 4
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 17:37:19 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 17:37:19 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 1
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 18:07:59 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 18:13:02 GMT+0700 (Indochina Time)
       Misses: 13
       Reviews: 41
       Review minutes: 5 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 18:21:52 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 18:29:32 GMT+0700 (Indochina Time)
       Misses: 8
       Reviews: 59
       Review minutes: 8 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 18:30:08 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 18:32:06 GMT+0700 (Indochina Time)
       Misses: 4
       Reviews: 14
       Review minutes: 2 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 18:36:54 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 18:42:43 GMT+0700 (Indochina Time)
       Misses: 8
       Reviews: 48
       Review minutes: 6 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 19:01:48 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 19:02:05 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 3
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 19:03:21 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 19:05:44 GMT+0700 (Indochina Time)
       Misses: 5
       Reviews: 20
       Review minutes: 2 Ganbarometer.user.js:470:15
     - Start: Sun Sep 19 2021 19:07:01 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 19:22:51 GMT+0700 (Indochina Time)
       Misses: 30
       Reviews: 99
       Review minutes: 16 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 12:36:14 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 12:37:35 GMT+0700 (Indochina Time)
       Misses: 1
       Reviews: 13
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 12:40:23 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 12:41:26 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 10
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 12:42:35 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 12:49:14 GMT+0700 (Indochina Time)
       Misses: 12
       Reviews: 46
       Review minutes: 7 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 12:49:53 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 12:49:59 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 2
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:11:00 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:11:17 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 4
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:12:38 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:13:38 GMT+0700 (Indochina Time)
       Misses: 1
       Reviews: 9
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:18:58 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:19:29 GMT+0700 (Indochina Time)
       Misses: 2
       Reviews: 5
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:24:13 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:24:19 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 2
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:25:48 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:25:48 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 1
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:26:46 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:26:46 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 1
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:48:52 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:49:16 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 5
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:50:33 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:50:38 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 2
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:51:36 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:52:14 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 5
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:54:13 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:54:38 GMT+0700 (Indochina Time)
       Misses: 1
       Reviews: 4
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:56:09 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:56:48 GMT+0700 (Indochina Time)
       Misses: 1
       Reviews: 6
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:58:13 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 15:58:20 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 2
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 15:59:50 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:00:02 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 3
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 16:01:22 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:01:22 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 1
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 16:02:25 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:09:06 GMT+0700 (Indochina Time)
       Misses: 10
       Reviews: 51
       Review minutes: 7 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 16:12:13 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:17:16 GMT+0700 (Indochina Time)
       Misses: 10
       Reviews: 32
       Review minutes: 5 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 16:24:09 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:25:11 GMT+0700 (Indochina Time)
       Misses: 1
       Reviews: 4
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 16:26:16 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:26:22 GMT+0700 (Indochina Time)
       Misses: 0
       Reviews: 2
       Review minutes: 0 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 16:28:00 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:28:41 GMT+0700 (Indochina Time)
       Misses: 2
       Reviews: 5
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 16:29:17 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:29:59 GMT+0700 (Indochina Time)
       Misses: 1
       Reviews: 6
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 16:30:38 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:31:42 GMT+0700 (Indochina Time)
       Misses: 3
       Reviews: 8
       Review minutes: 1 Ganbarometer.user.js:470:15
     - Start: Mon Sep 20 2021 16:33:38 GMT+0700 (Indochina Time)
       End: Mon Sep 20 2021 16:38:44 GMT+0700 (Indochina Time)
       Misses: 4
       Reviews: 50
       Review minutes: 5 Ganbarometer.user.js:470:15
224 apprentice 1 newKanji Ganbarometer.user.js:478:13
390 reviews per day (0 - 300 Ganbarometer.user.js:481:13
8 seconds per review (0 - 30) Ganbarometer.user.js:484:13
Difficulty: 1 (0-1) Ganbarometer.user.js:489:13
Load: 1 Ganbarometer.user.js:490:13
Speed: 0.26666666666666666 Ganbarometer.user.js:491:13
------ End GanbarOmeter ------

Edit x2: okay, so from what I understand, the ‘session interval’ parameter defines the gap between two reviews that the script needs to see before it counts them as separate sessions. If it’s 36 s, it makes sense that there would be a lot more sessions; perhaps a more reasonable amount, accounting for distractions and pondering, would be something along the lines of 1 minute.

Lol!

Lack of sleep blinded me.

Update coming shortly.

EDIT: It tickles me that the bug was LITERALLY on the very last line of the code, too. :yawning_face:

EDIT 2: The calculation is as easy as spelling “banana” — you just need to know when to stop!

1 Like

[Version 0.7] has now been posted to Greasy Spoon. Please update at your earliest convenience.

This version fixes the silly bug where the Session interval setting was interpreted in hours rather than minutes. Please reset Session interval to a value between 1 and 10 after updating.

Speed should now be calculated correctly for everyone.

My plans prior to releasing v1.0 (edited: added items and sorted in priority order):

  1. Add some validation for the settings
  2. Display a pareto of actual session intervals (at least the min/max within a session) in the debug log
  3. Add a callback to refresh the gauges when settings change
  4. (Optional) Use review_cache if present, otherwise fetch_endpoint() from WKOF
  5. (Optional) Display how many sessions occurred per day in debug log
  6. (Optional) Create a UI element to display items 2 and 5.

Thanks for your patience, everyone!

To make it easy for yourself, I would call the rendering function when the settings dialogue is closed, so that it can just create a new element and replace the old one.
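
Roughly something like this, assuming the WKOF Settings dialog accepts an on_save callback, and with renderGanbarometer() and #gbSection standing in for whatever the script actually uses:

const dialog = new wkof.Settings({
  script_id: 'ganbarometer',
  title: 'GanbarOmeter',
  on_save: () => {
    // Build a fresh section from the just-saved settings and swap it in for the old one
    const fresh = renderGanbarometer(wkof.settings.ganbarometer);
    document.querySelector('#gbSection')?.replaceWith(fresh);
  },
  content: { /* existing settings fields */ },
});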

2 Likes

I just published v1.0:

  • Specifies valid ranges for each setting
  • Debug log improvements
    • Log the delay in minutes from one session to another
    • Show a pareto of the delays between every individual review item

The pareto is extremely useful to find a good value for your Session interval setting. My debug output looks like this:

Not implemented (but one can hope):

  • Auto refresh on settings changes
  • UI element to show sessions for the review history
  • UI element to show the pareto of review-to-review intervals
  • World peace

This version should work well for everyone now, but please let me know if there are any further issues.

3 Likes

Guess what! I’ve got a fever and the only prescription is …

Well, more Ganbarometer, anyway.

I’ve got most of my to-do’s knocked out. But I’ve been looking hard at that Speed gauge.

I’ve got a development version that swapped out the gauge for a pareto chart of response times (instead of displaying just one average):

The pareto chart has 9 buckets: three at 10-second granularity, three at 1-minute granularity, and three at 5-minute granularity.

The chart above means that over the past 72 hours of reviews, I’ve gone to the next review item within 10 seconds exactly 201 times. I answered between 10 and 20 seconds 72 times. And so on. The final bucket contains all items that took longer than 10 minutes.

Remember that WK only logs when you start a review, not when you reply. So when you go for coffee or quit for the day, there is no way for the Ganbarometer to know; it has to estimate based on the start times of consecutive reviews.

Really long intervals mean you took a break between sessions. Defining “really long” precisely can be difficult, though. In my case, it’s pretty clear that intervals less than 2 minutes are all within the same session, and the three intervals on the right end of the chart indicate the starts of new sessions (so four sessions in all: the initial one, then three more separated by the three long intervals).
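
For anyone curious, the session-splitting logic boils down to something like this (a simplified sketch, not the script’s actual findSessions() implementation; it assumes the created_at timestamps from the WK reviews endpoint):

function groupIntoSessions(reviews, maxGapMinutes) {
  const maxGapMs = maxGapMinutes * 60 * 1000;
  const sessions = [];
  let current = [];
  let prevStart = null;
  for (const review of reviews) {
    const start = new Date(review.data.created_at).getTime();
    if (prevStart !== null && start - prevStart > maxGapMs) {
      sessions.push(current); // gap exceeds the threshold, so a new session begins here
      current = [];
    }
    current.push(review);
    prevStart = start;
  }
  if (current.length > 0) sessions.push(current);
  return sessions;
}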

Anyway, I find this more useful, but before I release it I wanted to get feedback.

Should I display a pareto chart instead of a simple gauge for the average?

1 Like

This looks really nice! Would it be possible to customize the buckets to shorter intervals? Personally, I almost never let a review sit for more than 30s, and rarely more than 20s; likewise others may prefer to take their time, so one setup wouldn’t be universally applicable.

Sure, I’ll add the bucket configuration to Settings.

Note that I thought I almost never took longer than 30 seconds to answer, too, but remember that this isn’t really measuring the time to answer: it’s measuring the interval until the next review starts. With misses especially, I often take time to read about the correct answer before going on to the next review.

My answers are mostly “typing speed” (and some take longer to type than others) or “some pondering required”. And when I miss, I almost always at least look at the correct answer, which takes some time.

Anyway, I think the current buckets fit your needs already. The buckets are:

  • 0 to 10 s
  • 10 to 20 s
  • 20 to 30 s
  • 30 s to 1 min
  • 1 min to 1 min 30 s
  • 1 min 30 s to 2 min
  • 2 min to 5 min
  • 5 min to 10 min
  • > 10 min

The last bucket will always be the catch-all.
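
Assigning an interval to a bucket just means taking the last bucket whose start is at or below the interval, roughly like this (helper name made up; bucket shape matches the pareto array in the script):

function bucketFor(seconds, pareto) {
  let match = pareto[0];
  for (const bucket of pareto) {
    if (seconds >= bucket.rangeStart) match = bucket;
  }
  return match; // the final bucket (rangeStart 600) catches everything over 10 minutes
}

// e.g. bucketFor(42, metrics.pareto).count += 1; // a 42-second interval lands in the 30 s to 1 min bucket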

[The dev version is at https://github.com/wrex/ganbarometer/raw/main/ganbarometer.user.js if anyone wants to play with it – but know it desperately needs cleanup/refactoring and hasn’t been tested much at all.]

1 Like

Ah, that’s fair. I didn’t think about it that carefully. But I mostly just glance at the answer and retype (or use 10ten) because I’m generally past the point of new items, so it’s mostly just refreshers now. Thus, even including the reading/skimming portion, it’s unlikely that my wrong items will take more than 20 s, and probably around 10 s. Personally I think the best intervals for me would be 0-10-20-30-40-50-∞, just to keep it simple, because there’s not much point in having nearly empty buckets when you could just lump those outliers into the rightmost column.

Makes sense. I forgot: level 60. :grin:

I think the defaults should work well for most, but I’ll add it to the user settings.

GanbarOmeter v2.0 is now available.

  • v2.0 Two gauges and a chart walk into a bar

    • Released 9/24/2021
    • Uses a “pareto” (-ish) chart to display the review-interval breakdown (instead of a
      gauge for average seconds per review)
    • Settings changes no longer require a manual refresh of the page
    • Displays gauges immediately, then updates values when the WK API call returns
    • Custom validator for Interval (must be between 1 and 24, or a multiple of 24 hours)
    • Fixes a bug when less than a full day of reviews is retrieved (interval < 24 hours)
    • Renamed “load” to “pace”
    • Versioning of settings (allows invalidation of stored settings with new script versions)
    • Layout tweaks and cleanup

Enjoy!

2 Likes

I seem to be having some issues with this new version.

The new charts are displaying fine, but I am not getting the new settings at all, and changes I make in the old settings are not being applied, which is causing further issues like…

  • Every time I change a setting and save, the values of each bar in the bar graph increase by an increment equal to the starting value (that is, if I have 79 in the first bucket, each subsequent save of the settings adds that initial value to the same bucket again).
  • Changes to the settings related to the weighting of misses do not change the difficulty; only changes made to the desired apprentice quantity and the weighting of the kanji difficulty change the difficulty gauge.
  • Changing the setting for maximum number of seconds per review seems to have no effect other than the issue of initial values adding to buckets for the bar graph.
  • Changing the background color does not take effect.
  • The up and down arrows on the input fields for the weight factors immediately set the field to 0 or 1 and trigger the validation warning. If possible, a default increment other than an integer for these fields would be great; otherwise, I think the fields should not have these arrows.

I suspect the issues around changes not taking effect are related to me not getting whatever the new settings dialog is meant to be. The issue with the bar graph might be related to the new refresh-without-a-browser-refresh behavior (doing a browser refresh resets the values of the bar graph to their initial values).

I have uninstalled and reinstalled the script but the issues persisted.

Something else I might look into adding to my installed copy of the script is an artificial delay so the element loads in after the Ultimate Timeline; I prefer it below the timeline, and currently it loads fast enough to end up above it.

With all that said, thank you for all the time and effort you’ve put into this! I really enjoy having some visual metrics to see on my dashboard and I appreciate the effort to make them meaningful values.

Hmm… definitely a bug in the code.

Let me take a look. I may need to make a debug version for you to try.

That definitely sounds like a bug in my code, but I may need some help to replicate the problem. Let me first see if I can find it through analysis alone; if I do need your help to replicate it, please let me know a time in your local timezone that would be convenient for debugging.

For the time being, you can at least get back to the default behavior (and provide me with some debug info) with the following steps:

  1. On the Wanikani dashboard page, right-click and choose “Inspect” to open the JavaScript debugger (this should work on Chrome, Firefox, or Safari, at least).

  2. Choose the “console” in the debugger to access the JavaScript console.

  3. In the console, type wkof.settings.ganbarometer and hit return.

  4. Then twirl open the disclosure triangle next to the output. If you could take a screenshot, it would help me to debug. It should look something like this:

  5. Now, to reset back to the defaults, type wkof.settings.ganbarometer = {} and hit enter.

  6. Finally, type wkof.Settings.save('ganbarometer'), hit enter, then close the JavaScript debugger and refresh the page.
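
In other words, the whole reset is just these three console commands:

wkof.settings.ganbarometer           // inspect the currently stored settings
wkof.settings.ganbarometer = {};     // wipe them back to empty so defaults are used
wkof.Settings.save('ganbarometer');  // persist the change, then refresh the page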

Hopefully that will get you able to use the script again.

There are two settings related to weighting misses: Typical percentage of items missed during reviews (not the best name) and Extra misses weighting. The first allows you to ignore a certain percentage of misses (nobody should expect to be perfect). For example, my last refresh showed a pace of 156 reviews per day and I’m using a default of 20% “typical” misses, so I can miss up to 156 * 0.20 = 31.2 misses/day on average without it causing any extra weighting. The Extra misses weighting factor only applies to extra misses above 31.2 per day.
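
In rough pseudo-JavaScript, the idea is something like this (names invented for illustration; the actual code differs):

function missPenalty(reviewsPerDay, missesPerDay, settings) {
  const allowed = reviewsPerDay * settings.normalMisses;   // e.g. 156 * 0.20 = 31.2 "free" misses per day
  const extra = Math.max(0, missesPerDay - allowed);       // only misses beyond the allowance count
  return extra * settings.extraMissesWeighting;            // this is what feeds the difficulty gauge
}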

Is this different than how it is behaving for you? The value shown below the difficulty gauge is the average misses/day, not the extra misses per day (though perhaps that might be more reasonable to show).

That would be because I forgot to remove this setting after changing the speed graph to a bar chart! It no longer has any meaning. I’ll remove that in v2.1 shortly.

Hmm. That’s probably related to your main issue of “not getting the new settings at all”. Let me add some additional logging in v2.1 as well.

This uses the Wanikani Open Framework dialog. I don’t think I can easily change the increments, so I may change all of these to text input fields rather than number input fields (which should eliminate the arrows). Let me take a look.
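
For example, switching a field’s type in the settings config should do it; something like this, if I’m reading the WKOF settings options correctly (the key shown is just one of the weight fields):

const settingsConfig = {
  script_id: 'ganbarometer',
  title: 'GanbarOmeter',
  content: {
    extraMissesWeighting: {
      type: 'text',    // 'number' inputs render spinner arrows; 'text' inputs do not
      label: 'Extra misses weighting',
      default: '0.03', // any numeric validation would then happen when the value is read
    },
  },
};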

I need to think about how best to accomplish this. The Ganbarometer, the Burn Progress script, and the Ultimate Timeline all use code to insert their sections immediately before the “progress and forecast” section, so the order in which these scripts execute affects the order on the screen. Even re-ordering the scripts in Tampermonkey doesn’t guarantee which code executes first, as all these scripts retrieve info asynchronously. Right now the Ganbarometer always wins the race because it adds its (temporary) section synchronously, before retrieving any data.

I don’t want to give up having the script display SOMETHING immediately, then populate once the data is asynchronously retrieved. I find that behavior preferable to having everything shift on the page after a random delay.
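
The current approach is essentially this (selector and helper names below are placeholders, not the script’s exact ones):

const section = document.createElement('section');
section.id = 'gbSection';
section.textContent = 'GanbarOmeter loading…';
// Synchronous insertion: this is why the GanbarOmeter currently wins the race
const anchor = document.querySelector('.progress-and-forecast'); // stand-in for the dashboard’s forecast section
if (anchor) anchor.before(section);
// Asynchronous population once the review data arrives
fetchReviewData().then((data) => populateGbSection(section, data));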

I suspect the only solution may be yet another Settings variable, which I would like to avoid.

Let me think on it for a bit.

Thank you! I’m pretty happy with this one myself. I think the pareto is a big win over just showing the average speed. I like having more information.

Bear with me as I work out the kinks.

If it helps any, I quickly added a line to the console output to check the version attribute of the settingsConfig, and it does give me the correct string (“settings-v2.0”). It’s just that changes I make to some settings aren’t being applied (for example, in the settings dialog, I have the color #408080 selected as a test color, and the console output indicates this as the value of the backgroundColor attribute; but the actual <div>s do not have this color applied; that is, the CSS was not updated).

I’m not well versed in JS, so I don’t know if I’m on any track here, let alone the right one, but it seems that populateGbSection() does not have anything in it that would update the inline style element with changes. I don’t have any guesses about the other issues at the moment, though.

{
  backgroundColor: "#408080"
  debug: true
  extraMissesWeighting: 0.03
  interval: 96
  maxLoad: 120
  maxPace: 300
  maxSpeed: 60
  newKanjiWeighting: 0.05
  normalApprenticeQty: 60
  normalMisses: 10
  sessionIntervalMax: 10
  version: "settings-v2.0"
}

Looking at this, and reading your more detailed explanations, I think half of my issues stem from not fully understanding how the metrics are calculated. Tomorrow I’ll do more careful testing with that new understanding to see if it is behaving as expected. With that said, the background color is definitely an issue, and removing the settings no longer needed for the pareto will fix a lot too.

Yeah, after looking at the code a bit I realized that the dialog was from WKOF and that you probably can’t control something like that. It’s okay; all it takes is trying to use the arrows once to learn not to use them again, and I’m sure having the fields return a number data type instead of text makes things easier for you.

Oh, please don’t feel like you have to do anything for this, especially if it would mean adding more settings that aren’t entirely necessary. I think with a little internet searching I can figure out how to create an artificial delay so it’s more likely that the Timeline will already be loaded when GB’s synchronous insertion happens, and keep that as a change just for myself.

I am in the US Pacific time zone (UTC-7); for more back-and-forth debugging I’d be pretty much available starting tomorrow, Saturday, from around 9 a.m.

1 Like

Great. Me, too.

I’ll post 2.1 here shortly with at least one fix (eliminating the maxSpeed stuff and fixing how I calculated the minutes for very short sessions).

I’ll work with you tomorrow for the rest.

Thanks!

@LupoMikti I see the problem with the background color. It’s because I refactored and now update the CSS in the DOM before I load the settings, so the variables in the CSS never get re-evaluated when the new settings are stored. My bad for not testing.
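
The fix will probably amount to re-applying the color after the settings have actually loaded, along these lines (a sketch, not the final patch; the selector is a stand-in):

function applyBackgroundColor(settings) {
  const section = document.querySelector('#gbSection');
  if (section) {
    section.style.backgroundColor = settings.backgroundColor;
  }
}
// ...and call applyBackgroundColor(wkof.settings.ganbarometer) whenever settings are loaded or saved,
// instead of baking the value into CSS that was injected before the settings existed.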

I think I’ll wait until the morning to post a new version. I want to get this fixed for sure. The easiest fix is to not display the section until all the data is loaded, but I want to avoid that.

[Edit] @LupoMikti the dev version at https://github.com/wrex/ganbarometer/raw/main/ganbarometer.user.js fixes most of the problems you were experiencing (including the background color not taking effect). Won’t fix adding it below Ultimate Timeline, though. Very little testing so be prepared to delete it. :slight_smile:

1 Like

I wasn’t sleeping well, so I decided to keep messing around with this, and discovered that the return value of each Session’s minutes() method call in the log was 0.25 even though the start and end times were correct. My unfamiliarity with JS meant it took longer to find the issue than I’d like, but the problem is in the return statement of the Session class method minutes() (in the 2.1dev version):

Current: return (endTime - startTime) / 1000 > 15
What it should be: return (this.endTime - this.startTime) / 1000 > 15

Made this change myself and my average seconds per review was correctly calculated again.

Also, I think I found what’s causing the pareto-adding-to-itself issue. Every time updateGauges() is called, collectMetrics() is called; every time that’s called, there’s a call to findSessions(). In findSessions() there’s a forEach loop that calls withinSession() for every review, and for every call to withinSession(), the count of the pareto bucket the review falls into is incremented.

I believe the issue is that the pareto buckets are not reset to the default empty values with each call of either collectMetrics() or findSessions() (depending on where you would think it more appropriate to reset the buckets, since each call to the former makes a call to the latter). I tested this myself by adding a reset to the findSessions() function, just under let sessions = [];. Here’s where JS confuses me: for some reason, I’m not allowed to use a property of the defaults const, or another const or var defined in the same scope as the metrics object to modify the pareto property of the object; I have to use the list literal defined in the metrics object initializer for it to take effect.

In any case, I will come back to this later on to figure out if the remaining issues were from me not understanding metric calculations and not bugs. Edit: I can confirm the problems I thought I had with the weights were due to lack of understanding, not any bugs.

Edit of the edit: actually, it turns out it was both. In the difficulty function of the metrics object, the heuristic for misses forgets to call the missesPerDay and reviewsPerDay functions and instead just references them (i.e. the code has this.missesPerDay and not this.missesPerDay(), which causes the heuristic not to be taken into account; same with this.reviewsPerDay above it). Additionally, I think you forgot to convert settings.normalMisses into a percentage, as I had to divide by 100 in the round function.

Wow! Thanks. I think you found the most egregious bugs. I’ll make the fixes this am.

My refactoring is looking more like spaghetti the longer I mess with it. It’s my first JavaScript program of any appreciable size.

[Edit: aargh. No, I won’t be able to get to this this morning. I’ve got another commitment that will keep me out of pocket all day. I’ll post an update tomorrow.]

Heaven knows I’m no JavaScript expert (not by a long shot), but I’m having trouble parsing this. If you could provide an example of what you’re attempting and what error you are receiving, that would help. With const objects, you can change the contents of the object, but you can’t change the reference to the object itself (not my favorite feature of JavaScript; const provides only the lightest of guarantees).
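
For what it’s worth, the distinction is just this:

const metrics = { pareto: [] };
metrics.pareto = [{ name: '10"', rangeStart: 0, count: 0 }];    // fine: replacing a property on the object
metrics.pareto.push({ name: '20"', rangeStart: 10, count: 0 }); // also fine: mutating the array
// metrics = {};  // TypeError: Assignment to constant variable.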

Regardless, your diagnosis is correct. I wasn’t resetting the metrics between invocations of updateGauges(). The solution was to create a new method called resetMetrics() that is invoked at the very start of updateGauges():

function resetMetrics() {
    metrics.reviewed = 0;
    metrics.sessions = [];
    metrics.apprentice = 0;
    metrics.newKanji = 0;
    metrics.pareto = [
      // name, rangeStart in seconds, count
      { name: `10"`, rangeStart: 0, count: 0 }, // 0 to 10 seconds
      { name: `20"`, rangeStart: 10, count: 0 }, // 10 to 20 seconds
      { name: `30"`, rangeStart: 20, count: 0 }, // 20 to 30 seconds
      { name: `1'`, rangeStart: 30, count: 0 }, // 30 to 1 min
      { name: `1'30"`, rangeStart: 60, count: 0 }, // 1' to 1'30"
      { name: `2'`, rangeStart: 90, count: 0 }, // 1'30" to 2'
      { name: `5'`, rangeStart: 120, count: 0 }, // 2' to 5'
      { name: `10'`, rangeStart: 300, count: 0 }, // 5' to 10'
      { name: `&gt;10'`, rangeStart: 600, count: 0 }, // > 10 min
    ];
  }

Fixed in the dev version.

Fixed in the dev version.

Yup. Another good catch. Fixed in the dev version on GitHub.

THANK YOU for catching these. All of these are silly little mistakes I should have caught if I’d been more thorough with my testing.

I need to figure out some way to add automated testing before I publish the dev version — I’m certain there are still bugs (I’m uncomfortable with the hacky way reviewsPerDay() is used, for example, when Interval is set to less than a full day). I also really need to add some caching, or go back to sharing review_cache with the heatmap script — the first call to the API each day is far too slow.

And I do want to allow more control over where the <section> gets rendered on the page, as well as more control over colors.

But I’m loath to do any of that without more automated testing. This script has grown fairly complex, and it’s demonstrably easy for me to introduce bugs.

I’m too tired today to do much manual testing (I’ve been driving all day), but I think the dev version is almost ready to publish if you want to give it a whirl.