[Userscript] The GanbarOmeter

I review in 1x1 mode (reading and meaning are back to back), configured so that the reading always comes first, so I don’t need any indicator at all.

The default CSS changes between white-on-black and black-on-white behind the full-width bar containing the words “meaning” and “reading”. That narrow black/white bar provides a much better trigger for me than the English word itself when my brain is in Japanese mode. (As I mentioned, even that’s a bit too subtle for my brain since the containing element is a fairly narrow bar; I’d prefer something in-your-face obvious, like how the whole top section changes for different subject types: radical/kanji/vocabulary.)

I was wrong, though. I wasn’t seeing any change in background color because I accidentally had both Breeze Dark and the Chrome Dark Mode extension enabled on the review pages.

Once I disabled Chrome Dark, I was able to see this not-subtle-at-all, in-your-face, high-contrast change in background colors, obviously enough to get my brain to switch modes:

Screen Shot 2022-01-13 at 10.08.06 AM

(That was sarcasm in case it wasn’t obvious. :wink: )

I do prefer dark mode in general, and I don’t blame the Breeze Dark author (@valeth) or current maintainer (@artemigos ?) at all for this — theming is hard, graphic design even harder, and the Breeze Dark theme is stunningly beautiful. This admittedly tiny loss of functionality is still a show-stopper for me, though. I’m going to either have to figure out how to fix it myself or give up on Breeze Dark.

Interestingly, Breeze Dark’s changes between radical/kanji/vocabulary items appear more in-your-face to me than the default CSS (and are strongly preferable). It’s just the meaning/reading change that becomes (much) more subtle.

(I’ll probably try to patch the theme if I can: Turning on dark mode instantly relieves a ton of unrecognized stress by eliminating those huge swaths of white. It’s really something.)


In other news: v4.0.8beta (release candidate) will likely get published today or tomorrow.

2 Likes

I feel called out :smiley:

It’s time to pour some effort into Breeze Dark again. I tried to gather what issues you’re having; here’s what I see so far:

  • reading/meaning backgrounds are too similar - this one is actually simple; it’s one of the few things that can be configured:
    [screenshot of the relevant setting]
  • broken answer background in the self-study quiz - I promised myself that I would review how all the supported user scripts work; I think I’ll start with this.
  • in general you seem to fight with Breeze Dark styles a lot to get what you want - I can’t promise anything here. I’m willing to help if I can, but I’m not sure how to approach this. In theory I could scope the styles more restrictively, but I can almost guarantee that it’s going to break something else for someone. That’s the unfortunate part of styling UI that I have no control over (or in case of user scripts - no knowledge of most of the time).

If you have any particular pain points just let me know. I will at least mull over if and how I can do something about them.

2 Likes

Yikes! Not my intent at all. Breeze Dark is awesome.

I’m showing my ignorance. I didn’t even realize Breeze Dark HAD configuration options. That solves my first and only real issue completely.

I’ve got a brute-force CSS workaround for the second item (self-study quiz). I just threw these two CSS rules into my top-level Svelte component’s styling:

:global(#wkof_ds #ss_quiz[data-result="correct"] .answer input) {
  color: #fff !important;
  background-color: #8c8 !important;
}

:global(#wkof_ds #ss_quiz[data-result="incorrect"] .answer input) {
  color: #fff !important;
  background-color: #f03 !important;
}

That’s really it. The last item was mostly me figuring out how to even accomplish themes/styling.

Thanks for the reply!

Beta version v4.0.8beta is now available. Those who have installed a previous beta should get the update automagically when Tampermonkey notices the change (Utilities → Check for userscript updates to force an update).

This is the production release candidate.

Changes since the last beta:

  • Updated all the underlying npm packages to @latest

  • Fixed all the tests and wrote a few more. Then broke them all again by installing more svelte plugins (mdsvex and svelte-number-spinner). Sigh.

  • Sets the duration of the last review in each session to the median of all reviews retrieved

  • Changed units to “qpm”, “spq”, and “rpd” (questions-per-minute, seconds-per-question, and reviews-per-day) throughout.

  • Added in situ help documentation: There’s now a Help menu, and the settings have little “info” icons to pull up in-context help.

  • Much theming. Should look reasonable everywhere in default or dark themes now. Looks quite nice with Breeze Dark.

  • Also added some CSS overrides for the self-study quiz so that correct/incorrect answers highlight properly.

  • First attempt at migrating settings between versions where possible. Moving from any 4.0.X version to any other 4.0.Y versions should preserve most user settings.

  • Since the GanbarOmeter value directly correlates to the number of items in different SRS stages, I’ve changed the labels to 少 and 多 (more sensible than slow/fast).

  • Changed to a Svelte NumberSpinner with three digits of precision for the different SRS weights. You can use the mouse or the keyboard to adjust the values. This was purely to keep @Kumirei off my back until I create the super-complicated version in 4.1. :stuck_out_tongue:

This should have fixed every issue I’m currently aware of other than some infrastructure stuff (borked tests). Please let me know ASAP if you discover anything that needs fixing (minor or not).

I’d also appreciate any reviews of the Help text. It’s longer than it needs to be, but hopefully it explains things well enough.

Enjoy!

1 Like

It doesn’t really need fixing, but while data was loading on the 4.0.8 beta, I encountered this amazing speed:

I’m glad to see this in the script now; it will help make this script more understandable for new users.

To infinity and beyond!

(I had no information to display yet and nothing cached in the new format. Displaying “infinity” makes me happier than “zero”.)

Now that all the important bits are done, I’ll tweak the Settings form CSS to look like the following in production (not in 4.0.8beta):

Default:

Breeze Dark:

2 Likes

I’ve been using the 4.0.8 dev version for several days now, and love it. The GanbarOmeter calculation and Expected Daily Reviews line in the bar chart have been working really great for me.

Buuuuuuuut, I’m already thinking about 4.1! :grin:

There’s really no rush to come out with anything better. But there is one remaining concern I’m still thinking about for 4.1:

If a user slacks off and doesn’t do all their reviews for a few days, nothing in the displays would nag at them, even though they’d be creating potentially exponential headaches for their future self.

Both the GanbarOmeter and EDR calculate their values from the SRS stage distribution of upcoming assignments — they don’t look at anything else. In particular, they don’t even look at when assignments are scheduled (only their current stage).

After going back and forth in my head several times, I’ve decided that both the GanbarOmeter and Reviews chart are absolutely fine as they are. No further tweaks are planned nor needed. (I’d understand if people are skeptical!)

I do, however, want to add some sort of warning if reviews start piling up over the next few days. I’ve not figured out exactly what I want to show, but maybe even a simple textual warning would suffice.

After lots of pondering, I think what makes the most sense is to simply count how many reviews are scheduled today, tomorrow, and the day after. If any one of those three numbers is greater than the target maximum reviews-per-day, display a warning. If the average of tomorrow and the day after is above the max, display a particularly stern warning.

It doesn’t make sense to look beyond the day after tomorrow, since you won’t see any early-stage items scheduled more than a couple of days out.
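For concreteness, the check described above could be sketched like this (the names `backlogWarning` and `maxReviewsPerDay` are purely illustrative, not actual GanbarOmeter code):

```typescript
type Warning = "none" | "warning" | "stern";

// Sketch of the proposed backlog check: given review counts for
// [today, tomorrow, dayAfter] and the user's target maximum
// reviews-per-day, decide which warning (if any) to display.
function backlogWarning(
  counts: [number, number, number],
  maxReviewsPerDay: number
): Warning {
  const [today, tomorrow, dayAfter] = counts;
  // Particularly stern warning: the average of the next two days
  // already exceeds the target maximum.
  if ((tomorrow + dayAfter) / 2 > maxReviewsPerDay) return "stern";
  // Plain warning: any single day exceeds the maximum.
  if (
    today > maxReviewsPerDay ||
    tomorrow > maxReviewsPerDay ||
    dayAfter > maxReviewsPerDay
  ) {
    return "warning";
  }
  return "none";
}
```
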

Where would you like to see such a warning, and what should it look like? Currently, I’m just thinking of a text warning below the Speed gauge, but I’m open to ideas.

2 Likes

Probably would make the most sense in the Reviews section, but I don’t know how it could be accomplished without increasing the vertical space of the script… maybe as an aside to the right?

1 Like

There’s plenty of room up top. Maybe a throbbing “Warnings” menu item that conditionally appears up top?

The beta is working out pretty great!

I’ll compile any suggestions I have for a future message. From what I can tell at a glance, a lot of my ‘proofing’ suggestions will center around word choice, as many of the words used are what I’d consider too advanced for the average user (speaking as someone whose job involves user acceptance testing and writing helpful explanations of things in our internal system, plus training videos). For example, you have the phrase “These ‘widgets’ display distilled information”, and that’s already going to lose people because of the word ‘distilled’.

It seems this still isn’t working for me as my single-review sessions still show 0/0 etc.

Turns out I end up with a fair amount of these kinds of sessions because when I miss an item in review it’s usually only 1 item haha

This worked just fine for me, although I have to say it usually worked on its own when Tampermonkey would actually update the script, instead of me having to manually reinstall from the URL. But it’s nice to have it more guaranteed now. I was getting pretty tired of needing to retype my settings over and over, tbh.

Please do compile any suggestions. It’s (mostly) just a markdown file. If you’re familiar with git/github you could even clone a copy, edit, and make a pull request: ganbarometer-svelte/Help.svx at v4.0 · wrex/ganbarometer-svelte · GitHub

Alternately, just cut and paste the markdown bits into an editor and send it to me (rw at pobox.com) and I can make the diff.

I despair if “distilled” is considered too difficult for English speakers learning another language, though. Just wait until they get to filial piety.

Hmm. This is odd. The fix should be included. Are you certain you’re running 4.0.8? (Obviously you are or you wouldn’t see the Help text.) Does it still show 0s no matter how many days of reviews you retrieve?

I think I still have an API key for you so I’ll look into it. Can you cut and paste all the session info and send it to me in the interim?

It only appeared that way. A few releases ago, I added a VERSION string to the gbSettings variable stored in localstorage. Now, whenever I make an incompatible change to the object structure of anything stored locally I update the version. This doesn’t happen every release.

When I first put in the version check, I just reset to defaults if there was a mismatch. This release copies over what I can if the version changed (and just adds anything new).
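In other words, something along these lines (the actual gbSettings shape isn’t shown here, so the setting names and version string below are made up):

```typescript
// Hedged sketch of version-checked settings migration. The real
// gbSettings object lives in localstorage with a VERSION string;
// everything else here (CURRENT_VERSION, maxReviewsPerDay, theme)
// is illustrative.
interface Settings {
  VERSION: string;
  [key: string]: string | number;
}

const CURRENT_VERSION = "4.0.8"; // hypothetical

const defaults: Settings = {
  VERSION: CURRENT_VERSION,
  maxReviewsPerDay: 150, // hypothetical setting
  theme: "default",      // hypothetical setting
};

// On a version mismatch, copy over any stored keys that still exist
// in the current defaults, and pick up defaults for anything new.
function migrateSettings(stored: Settings | null): Settings {
  if (!stored) return { ...defaults };
  if (stored.VERSION === CURRENT_VERSION) return stored;
  const migrated: Settings = { ...defaults };
  for (const key of Object.keys(defaults)) {
    if (key !== "VERSION" && key in stored) migrated[key] = stored[key];
  }
  return migrated;
}
```
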


Re: sessions with single review not getting their duration set to the median.

I’m mystified. I even created a test for this specific case and everything. It’s passing:

Here’s the test itself:

it("sets the duration of single review sessions to the median", async () => {
    // median(infinity, 1000, 2000, 3000, 4000, 5000, 30000) === 3500
    mockReviewCollection([
      mockReview({ reviewData: { created_at: "2019-10-01T00:00:00.000Z" } }), // session with single review
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:00.000Z" } }), // 2s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:02.000Z" } }), // 1s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:03.000Z" } }), // 5s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:08.000Z" } }), // 4s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:12.000Z" } }), // 3s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:15.000Z" } }), // unknown (30s)
    ]);
    const reviews = await getReviews(7);
    const sessions = parseSessions(reviews);
    expect(sessions.length).toBe(2);
    expect(sessions[0].reviews.length).toBe(1);
    expect(sessions[0].reviews[0].duration).toBe(3500);
  });

It first mocks WKOF to return seven Reviews (a single session of one review, then another session with six reviews).

Then it checks that there are two sessions, that the first session has a single review, and that the duration of that review is set to the median value.

Here’s the specific section of code that implements the logic (and I’ve verified it’s present in v4.0.8beta):

EDIT: whoops, I copy/pasted the wrong code. The previous version only set the final review retrieved. Here’s the code that does it for the final review within each session:

export const parseSessions = (reviews: Review[]): Session[] => {

  // ... other logic

  // Force the duration of the last review in each session to the overall median
  sessionSlices.forEach((sess) => {
    sess.reviews[sess.reviews.length - 1].duration = median_duration;
  });

  return sessionSlices;
};

Clearly you’re retrieving more than a single session, so I’m mystified. Let me write another test where the second-to-last session has a single review (matching your case exactly).

I can’t imagine how it would affect things, but you mentioned you had to keep changing your settings. Could you tell me what changes you’ve made from the defaults?

1 Like

Correct, all of the previous days are also showing zeroes. I will send the copy/paste of the session info in an email to you.

Ganbarometer: 100 - 180; labels changed to 遅い、良い、速い; rad12 = 1.05, kan12 = 3, voc12 = 1.5, app34 = 0.75, guru = 0.9, master = 0.25, enl = 0.01; no quizzes selected

Speed: past 7 days; 12 - 20 qpm; 40 - 120 rpd

Appearance: dark theme selected with changes to Track, Text, Fill, hlTrack colors

1 Like

“All of the previous days”? I don’t follow.

A value of 0.0 spq (infinity qpm) indicates it thinks the total duration of a session was 0 seconds. You’ve retrieved 338 reviews. The header was cut off in your screenshot, so I’m unsure how many sessions it found, but the most recent session, with 10 reviews, had reasonable spq/qpm numbers. The session before that, with a single review, had a duration of zero, though.

How many sessions did it find? And how many of them show 0.0 spq? This is very weird; it’s behaving exactly as if you didn’t get the fix I showed above (but I can’t figure out how that could be).

I’ve not received the email yet. I don’t think I ever requested an API key from you, either; please send one if you don’t mind (you can delete/disable it as soon as I debug this problem).

Oh that’s very strange if you haven’t been getting the emails. Does your email client block protonmail? I will try again (both the API token and this attachment) with a gmail address.

My bad. It was in my gmail spam bucket for some reason.

EDIT: SMACKS FOREHEAD. I found it. It’s a stupid output/presentation bug in the data view. v4.0.9beta out shortly.

Ah, that’ll do it. I also sent an email with my read-only API token on Jan 5th; can you check if that got caught too? If not, I’ll resend that email.

I was wondering if this might’ve been the issue. Glad it’s been found. As above, I can still send you my API key for testing purposes if you can’t find the original email.

Yup. Got it. Thanks and sorry for the confusion.

It’s actually a bit more involved to fix than I thought at first. It also highlights an interesting but common issue with software testing.

I was testing with a low-level unit test. My app retrieves a bunch of raw reviews and translates them into processed reviews with a duration. Then it uses MAD (median absolute deviation) to find sessions. Finally, rather than caching the sessions with all the underlying reviews, it creates an array of sessionSummaries that gets cached in localstorage.
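For anyone curious, MAD-based session splitting could look roughly like this sketch (the real parseSessions isn’t reproduced in this thread, so the `median + 3 * MAD` threshold is just an assumed choice):

```typescript
// Median of a list of numbers.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Split an ascending list of review timestamps (ms since epoch) into
// sessions wherever the gap between consecutive reviews is an outlier
// relative to the typical gap (median absolute deviation heuristic).
function splitSessions(timestamps: number[]): number[][] {
  if (timestamps.length < 2) return [timestamps];
  const gaps = timestamps.slice(1).map((t, i) => t - timestamps[i]);
  const m = median(gaps);
  const mad = median(gaps.map((g) => Math.abs(g - m)));
  const threshold = m + 3 * mad; // assumed multiplier
  const sessions: number[][] = [[timestamps[0]]];
  gaps.forEach((gap, i) => {
    if (gap > threshold) sessions.push([]); // outlier gap: new session
    sessions[sessions.length - 1].push(timestamps[i + 1]);
  });
  return sessions;
}
```
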

The sessionSummary objects have a start-time, end-time, and counts for the total number of questions and incorrect answers. Guess how I was calculating the session duration? :angry:

The solution is to add a duration field to each sessionSummary that’s the sum of the underlying review durations instead of just end minus start.
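A sketch of that fix (the Review and SessionSummary field names below are illustrative, not the script’s actual types):

```typescript
interface Review {
  started_at: number; // ms since epoch (hypothetical field name)
  duration: number;   // ms
}

interface SessionSummary {
  startTime: number;
  endTime: number;
  questions: number;
  duration: number; // the newly added field
}

// Build a summary whose duration is the sum of its reviews' durations.
// Using endTime - startTime instead would be zero for a single-review
// session, which is exactly the bug described above.
function summarize(reviews: Review[]): SessionSummary {
  return {
    startTime: reviews[0].started_at,
    endTime: reviews[reviews.length - 1].started_at,
    questions: reviews.length,
    duration: reviews.reduce((sum, r) => sum + r.duration, 0),
  };
}
```
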

The testing principle is to always test at the highest level possible. I was testing at the layer that creates the sessions, but not the layer that creates the sessionSummaries, nor the actual presentation layer (which is really what you want your tests to focus on).

Anyway, I’ll have 4.0.9beta out before the end of the day.

Thanks for the catch!

1 Like

Just pushed v4.0.9beta. This is now the release candidate.

Changes since v4.0.8beta:

  • Fixes (hopefully for good) the bug @Lupo_Mikti found when sessions contain only a single review, and adds tests for it.

  • Displays finer granularity of session durations. They’re still shown in minutes, but with two digits past the decimal, so extremely short sessions show something meaningful.

  • More styling work on the settings form.

This should be it. If nothing major is found over the weekend, I’ll move it over to production Monday. (Help text improvements can be pushed after v4.0.9 goes live for real.)

2 Likes