[Userscript] The GanbarOmeter

Isn’t that the same with all dashboard scripts?
Also, not sure it can be accomplished easily as the scripts are inherently tied to the screen being rendered :thinking: (but curious to learn more!)

3 Likes

I don’t know if this is a bug or not, but I don’t remember a review taking more than 1 minute, let alone 5 :face_with_raised_eyebrow:
Is there something I have to change in the settings?

1 Like

First of all, thanks for making this script! It’s a fun motivator for me to do my lessons.

One thing, may I ask how the speed’s calculated? Because even taking into account the 72-hour average, I get about 12 s/r from the Heatmap script (1), but your speedometer gives me 70 s/r.

(1) 1209 r, 4h9m (pulled from heatmap from last three days), 14940 s/1209 r gives about 12.357 s/r.

Inherently, yes. But a custom redraw upon save (WKOF settings allow a callback) can update the panel without having to reload the whole page. Especially if you have multiple scripts running, a reload can take a few seconds.
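Something along these lines (a bare-bones sketch; the setting and function names are placeholders rather than GanbarOmeter’s actual ones):

```javascript
// Bare-bones wkof Settings dialog with an on_save callback.
// updateGauges() stands in for whatever function redraws the panel.
wkof.include('Menu,Settings');
wkof.ready('Menu,Settings').then(() => {
    new wkof.Settings({
        script_id: 'ganbarometer',
        title: 'GanbarOmeter',
        on_save: (settings) => updateGauges(settings), // redraw instead of reloading the page
        content: {
            interval: { type: 'number', label: 'Session interval (minutes)', default: 10 },
        },
    });
    // The dialog itself is normally opened from a menu entry added via wkof.Menu.
});
```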

1 Like

I think I saw a setting that mentions the length of time between review sessions. Yep, ‘Session interval’.
Especially for people that review a few items, then handle a phone call before doing a few more reviews, a large session interval can make all that time count as ‘doing a review’.
Try shortening it to 5 or something and see what happens.

That’ll teach me to post a script just before going to bed. [Edit: not to mention replying before I’ve had my first coffee.]

I forgot that the first several people to try out a new script would be level 60s. I think this script is most useful for people between roughly levels 8 and 60, but I’m hopeful that it will eventually be useful for everyone.

Yup, that’s the intended behavior, which I meant to document in the first post (but cleverly forgot). I will edit the intro to mention this.

I’ll look into @rwesterhof’s suggestion of using a callback to refresh without going into an infinite loop. Currently, I load and update the settings on every refresh.

As described elsewhere, I think this is likely due to the Session interval value being too high (defaults to 10 minutes). Please set it to something like 2.5 minutes and let me know the result.

Also, could you enable debug then refresh and show me what’s printed in your JavaScript console? [Right-click, “inspect”, then click “console” in the developer window that appears.]

I first go through all the recent reviews looking for “sessions” (reviews done one after another without too much time in between each review). Each session tracks how many minutes elapsed during the session.

The reported speed (in seconds per review) is the sum of the minutes from all sessions, converted to seconds, divided by the total number of reviews performed across all sessions.

The default maximum time between reviews is 10 minutes. A lower value might be wiser (say somewhere between 2.5 and 5 minutes). I will almost certainly lower the default in a future version. (Since the default maximum speed is 30 seconds, I’m currently thinking a value of 5X to 10X the maxSpeed might be a reasonable default.)

I think I just corrected a bug in the logic, though: in v0.1, if a session only had a single review, the minutes() for that session would report as zero. This is because the API only reports the start time of a review, not how long it took to submit an answer, so the start and end times of a session with only one review are the same.

In v0.2 and later, I use maxSpeed/2 as an estimate for very short sessions (which I think can only happen with one review).
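In rough JavaScript, the idea looks something like this (a simplified sketch of the logic described above, not the actual script source; it assumes each review object carries a start timestamp and that the reviews are sorted by start time):

```javascript
// Group reviews into sessions and derive seconds-per-review.
// A new session starts whenever the gap between consecutive reviews
// exceeds the configured session interval.
function reviewSpeed(reviews, sessionIntervalMin, maxSpeedSec) {
    const maxGapMs = sessionIntervalMin * 60 * 1000;
    const sessions = [];
    let current = null;

    for (const review of reviews) {
        const t = new Date(review.timestamp).getTime();
        if (current && t - current.end <= maxGapMs) {
            current.end = t;      // still within the same session
            current.count += 1;
        } else {
            current = { start: t, end: t, count: 1 };  // start a new session
            sessions.push(current);
        }
    }

    // Minutes per session; a single-review session has start === end,
    // so fall back to maxSpeed/2 seconds as the estimate (the v0.2 fix).
    const minutes = (s) =>
        Math.max((s.end - s.start) / 60000, (maxSpeedSec / 2) / 60);

    const totalMinutes = sessions.reduce((sum, s) => sum + minutes(s), 0);
    const totalReviews = sessions.reduce((sum, s) => sum + s.count, 0);
    return (totalMinutes * 60) / totalReviews;   // seconds per review
}
```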

It appears both scripts agree on the number of reviews, so the difference must be in the number of seconds it took to perform those reviews. Heatmap is using a value of 14,940 seconds as you point out. GanbarOmeter must be using a value of around 84,630 seconds (23.5 hours of review time over 3 days, which makes no sense).

I’d really like to see your console output with debug enabled: The minutes() value returned for one of the sessions must be way off for some reason.

Also: 417 reviews/day over three days!! Yow. I’d be whimpering for sure. :grin:

The defaults are definitely aimed at slow-and-steady users like myself. Speed-runners will likely want to bump up some of the values. In your case, you may want to set:

  • Desired apprentice quantity to 300 or higher so the gauge doesn’t peg. You’ve currently got 296 apprentice items. The goal is to have the gauge show roughly 50% (needle straight up) for the desired difficulty level. With the defaults, someone with exactly 100 apprentice items, answering fewer than 20% of reviews incorrectly, and with no kanji in Apprentice1 or Apprentice2 would cause the gauge to display exactly 50%. With 296 apprentice items, you’re already pegging the meter without accounting for misses and new kanji.

  • Maximum reviews per day to 800 or so. With the defaults, the gauge will show 50% at 150 reviews/day. (See the sketch below for roughly how these settings map onto the gauges.)
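To make the settings a little more concrete: each gauge is basically a linear meter that pegs at 100%. The difficulty gauge reads 50% when you’re at exactly the desired apprentice quantity, and the load and speed gauges read 50% at half of their configured maximums. A simplified sketch (it leaves out the miss and new-kanji weighting on the difficulty gauge, and the names are placeholders rather than the script’s actual variables):

```javascript
// Simplified illustration of how the settings map onto the three gauges.
const clamp01 = (x) => Math.min(Math.max(x, 0), 1);

function gaugeValues(stats, settings) {
    return {
        // 50% when the apprentice count equals the desired quantity
        difficulty: clamp01(stats.apprentice / (2 * settings.desiredApprentice)),
        // 50% at half the configured maximum reviews per day
        load: clamp01(stats.reviewsPerDay / settings.maxReviewsPerDay),
        // 50% at half the configured maximum seconds per review
        speed: clamp01(stats.secondsPerReview / settings.maxSpeed),
    };
}

// With the defaults and your current numbers (296 apprentice, ~417 reviews/day, ~70 s/review):
// gaugeValues({ apprentice: 296, reviewsPerDay: 417, secondsPerReview: 70 },
//             { desiredApprentice: 100, maxReviewsPerDay: 300, maxSpeed: 30 })
// → { difficulty: 1, load: 1, speed: 1 }   (all three gauges pegged)
```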

Can you enable debug, refresh, and send me your console output?

I should leave debug on until it’s had a bit more usage. New version coming up.

I need to look at the code rather than my phone. I’m pretty sure I have another bug in my speed calculation logic.

Version 0.5 is now posted.

Changes:

  • debug enabled by default. If you had installed an earlier version, please go into settings → GanbarOmeter, click the checkbox for debug, save your settings, and refresh your browser window.

  • Session.minutes() no longer returns zero for sessions with only a single review. The minimum session time is now maxSpeed/2 seconds.

  • Changed debug log output slightly: delimits the start and end of GanbarOmeter logs, and includes the current settings.

  • Displays the number of misses per day in the Display gauge (as well as the number of new kanji)


Only a complete idiot would copy then forget to paste the link target! :roll_eyes:

1 Like

Love it!

1 Like

My bad, I was asleep! I’m not sure what’s happened, but I’ve updated to v0.5 and now the whole bar won’t even load; it gets stuck loading reviews, apparently (it’s been like this for quite a while).

[Screenshot: Screen Shot 2021-09-20 at 07.14.15]

And the console output (I think)

I’m not sure if this is really helpful because the script doesn’t seem to actually load, though.

Update: I’ve tried downgrading to v0.4 and v0.1. It’s still stuck.

What other scripts do you have loaded?

Do me a favor and try disabling every other script except Wanikani Open Framework and Ganbarometer, then quit and restart the browser (not just refresh).

I’m wondering if there is a weird interaction with another script (still my problem if so, but trying to debug).

The only other hypothesis is that the start date for requesting reviews is being calculated incorrectly. At level 60 you can have a LOT of reviews (the script attempts to retrieve just the last 72 hours’ worth).

Let me know if none of the above works and I can send you a one-off script that just logs before attempting to retrieve the reviews.

1 Like

Restarting & isolating seems to have done the trick (restarting probably did it, since I’d already tried refreshing and isolating). Oddly, it still seems to load quite slowly compared to other scripts, even running on its own.

1331 reviews in 72 hours
104.7 misses per day
1497 total minutes
3 sessions: 
     - Start: Fri Sep 17 2021 12:05:21 GMT+0700 (Indochina Time)
       End: Fri Sep 17 2021 16:52:35 GMT+0700 (Indochina Time)
       Misses: 110
       Reviews: 441
       Review minutes: 287 
     - Start: Sat Sep 18 2021 11:41:19 GMT+0700 (Indochina Time)
       End: Sat Sep 18 2021 22:01:29 GMT+0700 (Indochina Time)
       Misses: 92
       Reviews: 391
       Review minutes: 620 
     - Start: Sun Sep 19 2021 09:32:33 GMT+0700 (Indochina Time)
       End: Sun Sep 19 2021 19:22:51 GMT+0700 (Indochina Time)
       Misses: 112
       Reviews: 499
       Review minutes: 590 
296 apprentice 2 newKanji 
444 reviews per day (0 - 300) 
67 seconds per review (0 - 30) 
Difficulty: 1 (0-1) 
Load: 1 
Speed: 1 

although, just glancing at the output data, I’m pretty sure I’ve never had a review session longer than 3h, let alone 10h :stuck_out_tongue:

Set to 2.5 and disable all scripts except Wanikani Open Framework.

I guess I’m a little bit confused: does “review minutes” mean time spent on a single item or on a whole session? Based on the data, I think it means time spent on a review session.

Yeah, unfortunately that’s unavoidable. It takes a few seconds for the API to return the 1331 review items, so the gauges might not appear for a while.

I’ll try to improve a future version to render SOMETHING quickly (a “loading” message of some sort), then update the gauges once the data is retrieved from the API.
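Something like this, probably (just a rough sketch; fetchRecentReviews() and renderGauges() stand in for the real functions, and the dashboard selector is a placeholder):

```javascript
// Show a placeholder right away, then swap in the gauges once the data arrives.
const panel = document.createElement('div');
panel.id = 'ganbarometer';
panel.textContent = 'GanbarOmeter: loading reviews…';
document.querySelector('.dashboard')?.prepend(panel);

fetchRecentReviews()               // whatever promise returns the review data
    .then((reviews) => {
        panel.textContent = '';
        renderGauges(panel, reviews);  // draw the gauges into the same element
    })
    .catch((err) => {
        panel.textContent = 'GanbarOmeter: failed to load reviews';
        console.error(err);
    });
```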

Each session comprises one or more reviews, each with a timestamp of when the review was started. The “review minutes” reported in the debug output is the difference between the timestamps of the first and last review within the session. This isn’t exactly accurate, because the timestamps are when the review of that item started — I’d really like to have the timestamp when the last review ended, but that data doesn’t exist.

The second to last session shows, for example, that you reviewed 100 items and answered all but 3 of those items correctly. The first item in that session was reviewed at 18:14 local time, and the last item was reviewed at 23:11, a span of 296 minutes.

Does that sound correct, or is the script calculating something wrong?

EDIT: Wait, clearly something isn’t being calculated correctly. With a max session interval of 2.5 minutes and 100 items, the longest possible session time would be 250 minutes. This info should help me track down the problem.

Hmm. Let me think about this and look at the code. It appears I may have a bug in the logic that finds sessions.

Were you doing reviews at both 9 am and 7pm on Sunday? The last session entry shows those as the beginning and ending timestamps. I suspect that this should be counted as multiple sessions.

Please upgrade to v1.5 which reports the settings now that it’s working again. Was this output using a Session interval of 10 minutes? Is there any chance you did at least one review every 10 minutes from 9 am to 7pm Sunday?

Bear with me while I figure this out…

1 Like

I think so, yes.

Yup.

I don’t think so, given that I was in classes from 10am to 3pm…

v0.5? :slight_smile: I updated and restarted Firefox, and it’s been loading for quite some time now… (something like 10m? or maybe 13.68m)

update: 15m

I think the issue involves date calculations with different timezones. It could be affecting the review load as well.

Please disable for a bit, and I’ll post a new version shortly as soon as I figure out what I’m doing wrong with Date objects.

Thanks for reporting!

1 Like

Please update to v0.6 (just posted) and try again.

I’m no JavaScript expert and the whole dynamic typing thing is just breaking me. Date objects in particular are just weird: it’s hard to know what’s a number and what’s a Date, which makes the math unnecessarily difficult, IMO.

Anyway, I THINK the source of the problem may have been timezone issues coupled with how I was using the Date objects. I was definitely doing something truly weird with the options I was passing to wkof to fetch the review objects. I’ve simplified that significantly.
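For anyone curious, the fetch now boils down to computing a UTC cutoff and handing it to wkof, roughly like this (a sketch rather than the exact code, and it assumes wkof.Apiv2.fetch_endpoint accepts a filters object this way):

```javascript
// Fetch the last 72 hours of reviews. Date.now() and toISOString() are
// always UTC, so the local timezone can't sneak into the cutoff.
const HOURS = 72;
const cutoff = new Date(Date.now() - HOURS * 60 * 60 * 1000).toISOString();

wkof.include('Apiv2');
wkof.ready('Apiv2')
    .then(() => wkof.Apiv2.fetch_endpoint('reviews', {
        filters: { updated_after: cutoff },
    }))
    .then((response) => {
        // response.data should hold the review records, collated across pages
        console.log(`GanbarOmeter: ${response.data.length} reviews since ${cutoff}`);
    });
```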

Not sure if it will fix the problem, but here’s hoping.

If you and @tahubulat could both try v0.6 and report back, I’d appreciate it.

I’ve updated the script to v0.6 and restarted Firefox, but it’s still stuck on loading. :frowning:

Well that is certainly not alarming at all :sweat_smile:

1 Like