[Userscript] The GanbarOmeter

Thanks for all the hard work!

I tried adding another property to the defaults const called pareto, with the same array literal as the default one in the metrics const. Then I tried to set metrics.pareto = defaults.pareto;, but for some reason this wouldn’t take. I didn’t receive any errors, and I recreated the situation in the W3Schools “Try It Now” editor (the first quick JS interpreter I could think of, lol) just to verify that I’m supposed to be able to change the property of a const object, and confirmed that the basic scenario works.

Once I ran into this issue I realized the ‘clean’ way to handle it is a reset function, though I settled for just pasting the metrics.pareto = [ ... ]; assignment statement into the findSessions() function.
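
For what it’s worth, a reset function along those lines might look something like this (just a sketch using the names above; the bucket shape is guessed from the pareto code later in this thread). One plausible culprit for the assignment “not taking”: metrics.pareto = defaults.pareto makes both names point at the same array, so every count increment also mutates the defaults, and “resetting” to them appears to do nothing. Copying the buckets avoids the aliasing:

// A sketch, not the script's actual code: reset by copying the
// default buckets instead of aliasing the defaults array.
function resetPareto() {
  metrics.pareto = defaults.pareto.map((bucket) => ({ ...bucket }));
}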

“ganbarimeter”
best.name.ever

chuckling :smiley:


GanbarOmeter v3.0 is now posted.

Changes in this version:

  • v3.0 虫を踏む (“stepping on bugs”)

    • Released 9/26/2021
    • Numerous bug fixes
    • Pace renamed to “Reviews/day”; now shows session count and total reviews
    • Difficulty gauge shows weighted items in bold
    • Settings dialog cleanup (section names)
    • Added settings for pareto buckets
    • Added setting to make immediate loading an option
    • Weighting settings now text boxes (no incrementor) with custom validator

MANY thanks to @LupoMikti , @Zeiosis , @Sinyaven , @Smigedon , @tahubulat , @Redglare, and @rwesterhof for helping me chase down several bugs. Apologies if I’ve missed anyone.

That should be it as far as features go for quite a while. Doubtless there are still some bugs I’ve missed, but I do think I’ve squashed the worst of them.

I’ve taken the Microsoft route of making version 3 the first truly usable version!


When you have a moment, could you verify that leaving Display immediately after loading settings unchecked (the default) accomplishes what you want?

For the most part, reordering script loading in Tampermonkey should insert things in the desired order (as long as that box is unchecked). I think there’s still a bit of a race condition no matter what, though, as all these scripts grab data and update the page asynchronously: which goes where depends on who gets their data first.

I’m aware of at least three scripts that try to insert before() the progress-and-forecast div: Ultimate Timeline, Burn Progress, and GanbarOmeter.

I think the only way to guarantee the ordering on the page is if all of them add their sections synchronously, then update the contents asynchronously once data is returned. That way, the order the scripts are loaded will determine which ends up on the top of the page. Otherwise, it depends on the order in which results are returned.
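
In other words, something like this sketch (the helper names are hypothetical; only the progress-and-forecast div is from the actual page, and I’m assuming it’s selectable by class):

// Insert an empty placeholder synchronously, so page order is fixed by
// script load order rather than by whichever API response arrives first.
const anchor = document.querySelector(".progress-and-forecast");
const section = document.createElement("section");
section.classList.add("ganbarometer");
section.textContent = "Loading…";
anchor.before(section);

// Fill in the contents whenever the data actually arrives.
fetchReviewData().then((data) => renderGauges(section, data));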

I only control two of those three scripts, though. I’ll add the immediate load option to the next version of Burn Progress as well.

Most of the other fixes were for problems you found (THANK YOU): if you could verify the fixes wrt settings changes and the incrementor-arrows for weighting settings I’d appreciate it. I’m pretty sure they are all working correctly, but another set of eyes would be very welcome.

Would you please verify the new settings for the Pareto buckets in v3.0 work for you?

[It occurs to me that I should probably combine these into a single setting. Instead of a count, an array of labels, and an array of starting times, I should probably make it a JSON array of two-element arrays, but I couldn’t decide what would be easiest for users. The existing settings should at least be functional, though. I will definitely add a custom validator for the Pareto bucket settings in the next bug-fix release.]
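
Purely illustrative, the combined setting might be a single JSON array of [label, startSeconds] pairs (the labels here are just examples):

// Hypothetical combined setting, replacing the count + two arrays:
const paretoBuckets = [
  ["10s", 0],
  ["20s", 10],
  ["30s", 20],
  ["10m+", 600],
];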


[Screenshot of the GanbarOmeter, taken 2021-09-28]

it’s great! I don’t know how you did it, but it also loads significantly faster now :smiley:

I’d love to take credit, but I’m pretty sure it’s server-side caching.

The first load after several hours will still take quite a while, I think.

I debated about adding client-side caching, but decided it wouldn’t accomplish anything other than showing old (potentially misleading) info sooner.

I did learn an awful lot about JavaScript writing this. I just read a long “best practices” guide that makes me feel a little sheepish about the current state of the code. I’ll probably post one or two more updates to refactor and clean up the code, but I don’t plan on any new features for a while.

I’m really happy with everything this shows me at a glance now. I suspect it’s still too cryptic for most new users, but it all makes perfect sense to me. <laugh>

[Also: whoa! 941 reviews. If that’s over 72 hours you are indeed going much faster than me!]

I can indeed! The section loads under the timeline now. I noticed that the timeline might benefit from a similar approach to yours, since its section also loads in fairly quickly before the timeline itself does. You just go an extra step and fill in initial values to put content in there.

I can also verify that all of the settings are working as they should, and the previous number fields are now text fields with no incrementor arrows.


I’d also like to take a moment to give some final thoughts, though I’m in no way expecting anything to be done about them, as they feel a little nit-picky.

  1. I have a distaste for bar-graph classes that don’t make the class boundaries immediately clear, so I prefer a ‘<’ for each class except the last and a ‘+’ on the last to indicate inclusivity. With the default labels, if you asked someone where a value of exactly 10 seconds goes, almost everyone would say the first bucket, because it’s labelled “10s”; but the way it’s coded (if I’m reading it correctly), it actually goes in the second one, labelled “20s”. So I changed the labels to fix that. Unfortunately, this makes the names too long for the default width on my screen (1080p, 24" monitor); I needed to change the width property of #gbSpeed .chart from 15% to 300px to fit them (see the snippet after this list). It would be nice to have a setting to control the width of the pareto chart.
  2. I moved the “Daily averages…” text to the top of the section, as it was too misaligned with the center gauge for my liking (also I just personally think it looks better on top).
  3. The middle gauge is actually still jarring. I’m not familiar enough with CSS to figure this out at the moment, but I really want the elements in this section to adjust in width relative to the middle gauge, such that the middle gauge is always centered and the other two adjust as needed.
  4. Moving the <label> to the top left the pareto class labels right up against the section bottom, which didn’t look good. I added 0.5rem of padding to the .ganbarometer section to address this and to make it consistent with the padding of the Ultimate Timeline section (also included in the snippet after this list).
  5. I love this green for the graphs! But maybe others might like to customize the chart colors? ^-^ Just something to keep in mind.
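
For reference, the style tweaks from items 1 and 4 can be applied with something like this snippet (the selectors are the script’s; the injection wrapper is just one way to do it):

// User-style tweaks: widen the pareto chart and pad the section.
const style = document.createElement("style");
style.textContent = `
  #gbSpeed .chart { width: 300px; }   /* was 15%; fits the longer labels */
  .ganbarometer { padding: 0.5rem; }  /* match Ultimate Timeline's padding */
`;
document.head.append(style);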

Considering that all of these are small cosmetic gripes personal to my tastes, this script is very much in the useful and usable stage! I’ve taken to using the misses setting as a “strictness” factor: even though I typically miss 10%, I set it to 5% as a strictness goal for myself (and I also weighted extra misses beyond this at 0.05). I know this means the difficulty percentage isn’t actually ‘difficulty’ anymore, so maybe this can spawn more feature ideas? All in all I really like this script; thank you for taking all of the time to make it and clean it up.

Excellent! Thanks for verifying. And no worries about the feature requests (lily-gilding is my middle name). :grin:

Hmm. I note that ‘<’ doesn’t exactly capture the “from-to” aspect of each bucket either. In fact, technically, “>0” better captures the true meaning of the leftmost bucket than ‘<10’. Since the labels are now under user control, I’ll consider this request fulfilled regardless.

I think a user setting for this preference (on top or on bottom) is small enough that I can add it without cringing too much at violating my self-imposed “no new features” mandate in the next release.

I’m unclear if you are bothered by the horizontal alignment of the “Daily averages for the past 96 hours” text, by the fact that the green dial kinda drags your eye to the left, or by the fact that the middle gauge isn’t in the exact center of the section.

The “Daily averages” text is horizontally aligned to the middle of the entire section, which, I agree, looks a little odd.

My coding skills are weak enough, but my graphic design skills are far worse!

I’m unsure what the best layout might be. I could align the “96 hours” text with the middle gauge, but it actually applies to all three widgets (and least of all to Difficulty). I could separate the text from the widgets with a horizontal rule, but I fear Edward Tufte would find me and hurt me.

I do think it makes sense for the review interval bar graph to be wider than the other gauge widgets (it presents more information, and I need room for the bucket labels). This means the center gauge cannot be centered within the section. The current (fixed) width of the Pareto chart was chosen so that the design is properly responsive and renders well even on very small phone screens.

Doubtless, someone with better design sensibilities could find better places to display both the “96 hours” information as well as the “average 15s” information for the review intervals (I shoved that in somewhat randomly). I’ll continue to play with the overall layout and see if my artist daughter has any suggestions.

[EDIT: @LupoMikti : I think I have a solution: I should copy the layout of the progress-and-forecast div. That uses display: grid with a 6 column grid, while I’m currently using display: flex for my section instead of a grid. If I copy the grid layout with the same media-query breakpoints, I can get the gauges to align with the Lessons/Reviews and the Pareto to align with the Review Forecast. That should eliminate the unbalanced look.]
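
Something in this direction, though the real class names and breakpoints would need to be copied from WK’s stylesheet (everything below is assumed, not the script’s current CSS):

// Sketch: mirror the 6-column grid of progress-and-forecast.
const gridStyle = document.createElement("style");
gridStyle.textContent = `
  .ganbarometer { display: grid; grid-template-columns: repeat(6, 1fr); }
  .ganbarometer .gauges { grid-column: span 2; }  /* under Lessons/Reviews */
  .ganbarometer .pareto { grid-column: span 4; }  /* under Review Forecast */
`;
document.head.append(gridStyle);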

Welcome to my pain! :slight_smile:

I understand just enough CSS to be dangerous. Small tweaks can propagate in weird and wonderful ways, especially when you consider multiple screen sizes and “responsive” designs.

Absolutely. Further theming customization in the user settings is already “on the list”. But I thought the background color was most critical (for those that prefer dark themes vs. the WK defaults).

I plan to focus on bugs, code clarity/quality, and performance (in that order) for the next few “dot” releases. But I will add proper theming eventually.

This is very much in the spirit of the design. The difficulty gauge is absolutely a personal preference (as is the desired number of reviews/day). I expect people to tweak these settings to their own preferences.

The default values for Difficulty very much reflect my own personal preferences. I find early-stage kanji hardest, but that is far from universal (some prefer kanji to vocabulary).

My bias (personally and professionally) has always been toward KISS. I really don’t want to add more settings than absolutely necessary, but as long as the defaults are reasonable I’m not too strict about it.

It seems reasonable to me that some may not want to weight the number of new kanji or “extra” misses at all, so I’ve allowed those weights to be set to zero if desired. But I’m loath to add extra weighting for, say, early radicals or early vocabulary. There are any number of further features and tweaks that could be added, but they invariably come at the expense of even more complexity (and backward-compatibility costs down the road).

You’re very welcome.

This has mostly been an excuse for me to learn JavaScript, WKOF, and the WK API, but I’ll admit that it has been an extremely satisfying project. I find it quite useful, and enjoy seeing this information at the top of my screen every morning.

I’m glad to hear others do as well!

Aha, this one was less a request and more a description of why I did it. Oh, and that’s a fair point; I just think the special case of the x-axis being time comes into play here (it can’t be negative, so one knows 0 is the least it can be). But making all of the bucket names use “>” is definitely the best personal solution and matches the code exactly, so thank you for pointing this out ^-^

I can’t say I understand CSS beyond knowing what grid is in concept, but if it works that’s great!

Funny you mention this, I was actually just thinking about how much I wish vocab was included because I tend to struggle with it the same amount as kanji (except in readings, I miss on’yomi more than kun’yomi).

[Screenshot: wkstats accuracy]

But I completely get what you mean about not going overboard with all of the settings. I think I might try and modify my local install to do vocab and make settings for it.

Unless you want vocab to be weighted more heavily, you should probably just set the new kanji weighting to 0.

You’re welcome to fork your own version of the code, of course, but beware that you won’t get any further updates I make.

[I also note in your wkstats that your overall accuracy with kanji is significantly lower than for vocabulary or radicals. You’re missing readings more than meanings for both kanji and vocab, but does why you miss them really matter? I’d honestly recommend sticking with the defaults for a while: at least until you have a significant quantity of items in Master stage (for me that was around level ten or so), or better yet until you start burning items (around level 17 for me).]

This comment did prompt me to take another look at the code, and thanks to your helpful comments in it, I understand that I don’t need this; the default works fine. (The line “harder than other apprentice items” is what did it. Somehow I forgot that the weighting had to be in comparison to the other items, and I was under the impression that vocab wasn’t included in the metric at all.)

Ah, so the thing is, I’m like a returning student haha. I had a large pile of reviews I finally got down after a month and now I’m back to learning new material. I already have 462 burns for example (I plan to resurrect some of them eventually but there are a fair number of ones I do still remember immediately when I see them). Here’s my heatmap to show I haven’t been a very good student xD

But yes, I do struggle with kanji readings the most. In fact, I’m starting to wonder if there are any scripts or third-party sites that can give me a list of all my items (kanji + vocab) ordered from least % correct for readings to most (with a threshold of, say, 90% and above, where it doesn’t return any items) so that I can specifically study those.

I thought you seemed awfully knowledgeable about the SRS compared to others at level 6. I think I’ll give you the nickname 一級(いっきゅう)さん.

[EDIT: I had the kanji wrong. I just discovered that the real monk’s name was 一休(いっきゅう)さん but I was thinking of the anime character of the same name. That kanji (rest vs. level) works even better, though!]


Aha, thanks for the nickname! It’s quite fitting indeed ^ ^

So, I think I have one last bug to report. I noticed that the final bucket of the pareto kept going up by one each time I finished a review session. I definitely haven’t been spending 10+ minutes on any items recently so that struck me as odd.

I think what’s happening is that the final item of any session is being put into the last bucket whenever the time between sessions is greater than the maximum set in the settings. Let me see if I can rephrase that for clarity.

Say I have 20 items in a review session. An hour passes after completing it, and I do another review session with 10 items. I believe the script is putting the final item of the 20-item session into the rightmost bucket, because the time between it and the start of the next session is greater than the specified maximum. I think this is a result of the withinSession() function incrementing the pareto just after it calculates the time difference: the time difference is more than the maximum, so it increments the last bucket, then returns false for being within the session.

I think the fix is that the incrementing needs to happen after determining whether something is within the session as defined by the user settings, and that the final bucket of the pareto should not be a >= but should instead have a maximum equal to the user-given setting. Also, for clarity: I think the findSessions() function is working correctly, in that it properly separates sessions and has the correct review counts for each session.

I’ll look at it more closely, but I think the algorithm works as intended. In the scenario you describe, there are 30 reviews in two sessions. 20 performed within 10m of each other, a long gap before the second session starts, then 10 more in quick succession. The algorithm should return two sessions (one of length 20, and one with length 10) and one item in the “>10m” bucket (the first review of the second session).

Remember that the API only returns the start time of a review, and since the algorithm walks through the reviews in consecutive order, it can only know how much time has elapsed between the current item and the PRIOR (not next) item.

The else clause in the logic triggers when the current review occurs more than Interval minutes after the prior review. When that happens it starts a new session with that item as the first review in the session, and updates the final bucket counter. I don’t see how it can matter whether the counter is incremented before or after creating the new session: it saw an interval of the required duration regardless.
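
The surrounding walk looks roughly like this (paraphrased from memory with assumed names, not a paste of the actual findSessions()):

// Walk the reviews in start-time order, comparing each to the PRIOR one.
let sessions = [newSession(reviews[0])];
let prevStart = reviews[0].startTime;
for (const review of reviews.slice(1)) {
  if (withinSession(prevStart, review.startTime, settings.maxMinutes)) {
    sessions[sessions.length - 1].reviews.push(review); // extend current session
  } else {
    sessions.push(newSession(review)); // long gap: this review starts a new session
    // (the final pareto bucket is also bumped, since the gap exceeds every rangeStart)
  }
  prevStart = review.startTime;
}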

Am I missing something?

Please let me know if the scenario you describe displays anything other than two sessions and one item in the final bucket.

In my testing, the final bucket counts were always one less than the number of sessions, which seems correct.

[Also, I’m not at my computer atm, but the withinSession() function should just return a Boolean value based on the arguments passed. If it touches the Pareto buckets or has any other side-effects at all, it’s a definite bug. I’d be very surprised if I messed that up, though.]

It doesn’t seem like it; I just wasn’t under the impression this was intended behavior for the pareto (I knew it was for finding and defining sessions). In that case I just have a couple of questions. Are these values correctly removed from the average-seconds calculation? Is it possible to make it so these values aren’t shown in the pareto at all?

There isn’t any way to distinguish a single review that took longer than the defined max interval from a review at the boundary of two sessions (because, indeed, there is no distinction: if you sit on a review item for 10m+ and your set max interval is 10m, it might as well be a new session even if you didn’t actually start a second one). So the final bucket isn’t a very useful bar to have when it represents values greater than your max interval setting. Thus, the chart buckets should have an imposed maximum equal to the user-set max interval.

One way I can think of to accomplish this is to always create a starting point in the final position of the pareto array equal to the max interval in seconds (if there is not already a value in the array greater than or equal to it), and then adjust the value of bucketCount so that this extra bar is not displayed. (If bucketCount is already 10 and this addition creates 11 buckets, then no change to bucketCount is needed. One special scenario to account for: bucketCount is 10, the array length is 11, and the 10th value of the array is not greater than or equal to the max interval; in that case, do nothing.) Something like the sketch below.
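
In rough code (my sketch, borrowing the bucket shape from your source; settings.maxMinutes stands in for the user setting):

// Ensure an overflow bucket starts exactly at the session cutoff...
const maxSeconds = settings.maxMinutes * 60;
if (!metrics.pareto.some((b) => b.rangeStart >= maxSeconds)) {
  metrics.pareto.push({ rangeStart: maxSeconds, count: 0 });
}
// ...then simply don't render that overflow bucket.
const visibleBuckets = metrics.pareto.filter((b) => b.rangeStart < maxSeconds);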

I feel it’s less a bug and more an unintentional break from the functional coding conventions you were trying to follow. But yes, withinSession() has the side effect of incrementing the pareto buckets. It’s very C-like to use the same function you call in the expression of an if statement (inside a forEach loop) to also increment another object, but it is very much not in line with the functional programming paradigm.

Well, that was certainly my intent.

The bar chart shows the breakdown of intervals between all reviews in the past 72 hours (by default). This includes the intervals between sessions.

Each session has its own method to return the number of minutes consumed in that session (literally just the difference in time between the start of the first and last reviews in that session).

The overall “average seconds per review” shown is intended to be the total time spent within sessions over the past 72 hours divided by the total number of reviews; it should NOT include the time between sessions.
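
Roughly this (the names are assumed, but this is the intent; each session’s minutes() method is the one mentioned above):

// Average seconds per review: only time spent inside sessions counts.
const secondsInSessions = sessions.reduce((sum, s) => sum + s.minutes() * 60, 0);
const averageSecondsPerReview = secondsInSessions / totalReviews;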

The bar chart should show all intervals, whether within a session or between sessions, imo (since the minimum interval between sessions is a user setting). The average speed, however, only makes sense if it ignores the time between sessions. That’s how I intended it to behave, anyway.

The primary value of the Pareto chart is to help you decide on a reasonable setting for the minimum interval between sessions: it’s not uncommon to take a phone call, pour a cup of tea, or whatever while doing your reviews, and the bar chart lets you see whether any of these sub-10-minute but longer-than-a-few-seconds intervals occurred (if so, you can decide whether or not to consider them the starts of new sessions).

I fail to see any value in hiding the number of “between-session” intervals. That could be multiple buckets depending on a user’s settings.

I’m in the middle of a major refactoring of the code to make it much more readable, as well as an attempt to write some tests, but you definitely discovered another pretty embarrassing bug — withinSession() does indeed have a side effect (and it absolutely shouldn’t).

I have no idea how or why I managed to put the code to update the pareto counts in there.

Here is withinSession() in its entirety:

// Determine if newTime is within maxMinutes of prevTime
function withinSession(prevTime, newTime, maxMinutes) {
  let timeDifference = newTime - prevTime;

  // increment appropriate pareto counter
  for (let i = metrics.pareto.length - 1; i >= 0; i--) {
    let bucket = metrics.pareto[i];
    if (timeDifference >= bucket.rangeStart * 1000) {
      bucket.count += 1;
      break;
    }
  }
  return timeDifference <= maxMinutes * 1000 * 60;
}

I don’t know what I was thinking: that for loop has no business being inside of that function. I’m honestly shocked to find it there. It has no relation to the return value whatsoever. The purpose of that function is simply to answer whether newTime is within maxMinutes of prevTime.
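
The eventual fix is straightforward (sketched below; the tally helper and its name are new): make withinSession() pure, and move the bucket update out to the code that walks the reviews.

// Pure: no side effects, just answers the question it was asked.
function withinSession(prevTime, newTime, maxMinutes) {
  return newTime - prevTime <= maxMinutes * 60 * 1000;
}

// The pareto update lives in its own function, called by the review walk.
function tallyInterval(pareto, prevTime, newTime) {
  const timeDifference = newTime - prevTime;
  for (let i = pareto.length - 1; i >= 0; i--) {
    if (timeDifference >= pareto[i].rangeStart * 1000) {
      pareto[i].count += 1;
      break;
    }
  }
}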

Despite this rather egregious placement of code and poor separation of function, I think the overall script is still behaving as intended.

As I said, I’m going through some very thorough housecleaning at the moment (this code has grown quite long and complex very quickly). I don’t want to push a new version until that is done unless there is an actual bug in behavior.

If you believe something is being reported incorrectly, please let me know and I’ll push out a fix in the interim.

I just updated to version 3.1.

This is just a one-line change: I stopped requiring review-cache as I’m not using it in the script currently.

This may actually eliminate the long pause before the meters are displayed for many people.

I’m still planning a major update to this script, but I’m deep in the throes of a massive amount of learning. I’m about halfway through an excellent Udemy course on TDD with Svelte (highly recommended, FWIW).

Are you saying that your script is faster without the cache? This is strange. The point of a cache is to accelerate scripts by avoiding network delays. Do you imply review cache is unable to do so?

Sorry for the misunderstanding, that’s not what I meant.

I was using the cache (shared with heatmap) in an earlier version, but as I’m only fetching a few days of reviews at most I decided to just retrieve directly and not cache, so I removed all calls to the review-cache functions. But I forgot to remove the require.

The nature of the script is that you’ll invariably want the most recent data anyway (it’s most interesting immediately after performing some reviews) and with only a few hundred reviews retrieved at most, caching isn’t a huge win (unlike with heatmap which might retrieve tens of thousands of reviews depending on your settings).

This was just a trivial change, but it appears that just requiring the review-cache in the script was causing an appreciable delay even though I never called any of the functions in that module. I’ve not looked into why that might be the case (and it might not be significant, either – I’ve not measured anything).


Hey there! Sorry for the delay in any response. I couldn’t think of how I wanted to word it and eventually time to dedicate to this slipped away from me.

In any case, I’ll address a few things. Firstly, thank you for confirming that the average seconds per review is properly calculated. I also took a closer look at how it is calculated, and I understand it much better now. A side note: would you consider using (...).toPrecision(2) instead of Math.round(...) in the return statement, so that if the value is ever less than 10 seconds it can show a decimal? I found it much more useful to see a value of 9.8 seconds than to have it rounded to 10, and precision to two significant figures feels the most appropriate here, as it still rounds to the nearest integer for double digits.
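
For example (note that toPrecision() returns a string, so it may need a Number() wrapper if the result is used in further math):

Math.round(9.83);       // 10
(9.83).toPrecision(2);  // "9.8"
(15.4).toPrecision(2);  // "15" (still a whole number for double digits)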

My point with this was that it does not add useful information, while at the same time providing information that can mislead. It makes the user think those reviews are being counted in the averages, because of their presence in the chart, and it gives the impression that there are review items that actually took them that long when there aren’t.

Additionally, the number is always 1 less than the actual number of sessions, which makes it even less useful to show, because the number of sessions already appears under the “Reviews per day” gauge. Even if this were split into multiple buckets, I don’t see why this information should be in the review pareto rather than its own graph/chart (the information it would provide is a pareto of how long your breaks between sessions are).

Also, one can consider the following scenario:


If the user sets a max session interval of 15 minutes, completes a review session, and starts a second one 10 minutes later, and their chart buckets have separations for 10m+ and 15m+, then that review will show in the 10-minute bucket even though it did not actually take them 10 minutes. Thankfully it won’t be part of the average, but I cannot imagine this is a desired outcome, and I don’t find it reasonable to expect the end user to understand that they would need to set a shorter session interval to prevent it. And if they do set a shorter interval, we return to the original issue: the final bucket always shows 1 less than the number of sessions, failing to provide useful information and instead providing redundant and potentially misleading information.

So, in summary, I believe the chart should always cut off at the user-chosen max session interval; charts that can mislead should be changed to eliminate that as much as possible, in my opinion.


Now, to not end on a heavy note like that, I’ll take a moment to say that I really do appreciate the insights this userscript gives me. For example, once I got out of doing old reviews and back into learning new material, the data being shown to me really changed a lot! I was able to see that my chosen number for max reviews and desired number of apprentice items were spot on as the gauges sat near the middle for quite some time and I felt very appropriately challenged too.

I also got to see how the review times changed. They went from me having over 100 reviews under 5 seconds and the rest falling off like a graph of 1/x, to the 0+/5+/10+ buckets flattening out, to now the [5s, 10s) bucket containing the vast majority of entries. This has really allowed me to monitor my learning so that I can make adjustments if needed and that’s very valuable, so thank you so much for it.

I look forward to the future of this script and its further improvements and developments!