[Userscript] The GanbarOmeter

Please do compile any suggestions. It’s (mostly) just a markdown file. If you’re familiar with git/github you could even clone a copy, edit, and make a pull request: https://github.com/wrex/ganbarometer-svelte/blob/v4.0/src/components/Help.svx

Alternatively, just cut and paste the markdown bits into an editor, make your edits, and send the result to me (rw at pobox.com) and I can make the diff.

I despair if “distilled” is considered too difficult for English speakers learning another language, though. Just wait until they get to filial piety.

Hmm. This is odd. The fix should be included. Are you certain you’re running 4.0.8? (Obviously you are or you wouldn’t see the Help text.) Does it still show 0s no matter how many days of reviews you retrieve?

I think I still have an API key for you so I’ll look into it. Can you cut and paste all the session info and send it to me in the interim?

It only appeared that way. A few releases ago, I added a VERSION string to the gbSettings variable stored in localStorage. Now, whenever I make an incompatible change to the object structure of anything stored locally, I update the version. This doesn't happen every release.

When I first put in the version check, I just reset to defaults if there was a mismatch. This release copies over what it can if the version changed (and just adds anything new).
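For anyone curious, here's a minimal sketch of that migration idea in TypeScript. Only gbSettings and VERSION come from the script; everything else below is my own illustration, not the actual implementation:

interface GbSettings {
  VERSION: string;
  [key: string]: unknown;
}

const VERSION = "4.0.8"; // bumped only on incompatible structure changes

// Hypothetical defaults object; the real one holds the script's settings.
const DEFAULTS: GbSettings = { VERSION /* ...plus default values... */ };

export function loadSettings(): GbSettings {
  const raw = localStorage.getItem("gbSettings");
  const stored = raw ? (JSON.parse(raw) as GbSettings) : null;

  if (!stored) return { ...DEFAULTS };
  if (stored.VERSION === VERSION) return stored;

  // Version mismatch: start from the new defaults, copy over whatever
  // the old settings had, then stamp the current version.
  const migrated: GbSettings = { ...DEFAULTS, ...stored, VERSION };
  localStorage.setItem("gbSettings", JSON.stringify(migrated));
  return migrated;
}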


Re: sessions with a single review not getting their duration set to the median.

I’m mystified. I even created a test for this specific case and everything, and it’s passing.

Here’s the test itself:

it("sets the duration of single review sessions to the median", async () => {
    // median(1000, 2000, 3000, 4000, 5000, 30000) === 3500 (the unknown Infinity duration is excluded)
    mockReviewCollection([
      mockReview({ reviewData: { created_at: "2019-10-01T00:00:00.000Z" } }), // session with single review
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:00.000Z" } }), // 2s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:02.000Z" } }), // 1s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:03.000Z" } }), // 5s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:08.000Z" } }), // 4s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:12.000Z" } }), // 3s duration
      mockReview({ reviewData: { created_at: "2019-10-04T00:00:15.000Z" } }), // unknown (30s)
    ]);
    const reviews = await getReviews(7);
    const sessions = parseSessions(reviews);
    expect(sessions.length).toBe(2);
    expect(sessions[0].reviews.length).toBe(1);
    expect(sessions[0].reviews[0].duration).toBe(3500);
  });

It first mocks WKOF to return seven Reviews (a single session of one review, then another session with six reviews).

Then it checks that there are two sessions, that the first session has a single review, and that the duration of that one review is set to the median value.

Here’s the specific section of code that implements the logic (and I’ve verified it’s present in v4.0.8beta):

EDIT: whoops, copy/pasted the wrong code. The previous snippet only set the duration of the final review retrieved. Here’s the code that does it for the final review within each session:

export const parseSessions = (reviews: Review[]): Session[] => {

  // ... other logic

  // Force the duration of the last review in each session to the overall median
  sessionSlices.forEach((sess) => {
    sess.reviews[sess.reviews.length - 1].duration = median_duration;
  });

  return sessionSlices;
};

Clearly you’re retrieving more than a single session, so I’m mystified. Let me write another test where the second-to-last session is the one with a single review (matching your case exactly).

I can’t imagine how it would affect things, but you mentioned you had to keep changing your settings. Could you tell me what changes you’ve made from the defaults?

1 Like

Correct, all of the previous days are also showing zeroes. I will send the copy/paste of the session info in an email to you.

Ganbarometer: 100 - 180; labels changed to 遅い、良い、速い (slow, good, fast); rad12 = 1.05, kan12 = 3, voc12 = 1.5, app34 = 0.75, guru = 0.9, master = 0.25, enl = 0.01; no quizzes selected

Speed: past 7 days; 12 - 20 qpm; 40 - 120 rpd

Appearance: dark theme selected with changes to Track, Text, Fill, hlTrack colors

1 Like

“All of the previous days”? I don’t follow.

A value of 0.0 spq (infinite qpm) indicates it thinks the total duration of a session was 0 seconds. You’ve retrieved 338 reviews. The header was cut off in your screenshot, so I’m unsure how many sessions it found, but the most recent session, with 10 reviews, had reasonable spq/qpm numbers. The session before that, with a single review, had a duration of zero, though.
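For context, here’s a tiny sketch of how those two numbers relate; this is just my illustration of the arithmetic, not the script’s actual code:

// spq: seconds per question; qpm: questions per minute.
const spq = (durationSeconds: number, questions: number): number =>
  durationSeconds / questions; // a 0-second session yields 0.0 spq

const qpm = (spqValue: number): number => 60 / spqValue; // 0.0 spq yields Infinity qpm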

How many sessions did it find? And how many of them show 0.0 spq? This is very weird; it’s behaving exactly as if you didn’t get the fix I showed above (but I can’t figure out how that could be).

I’ve not received the email yet. I don’t think I ever requested an API key from you either, so please send one if you don’t mind (you can delete/disable it as soon as I debug this problem).

Oh, that’s very strange if you haven’t been getting the emails. Does your email client block Protonmail? I will try again (both the API token and this attachment) with a Gmail address.

My bad. It was in my Gmail spam bucket for some reason.

EDIT: SMACKS FOREHEAD. I found it. It’s a stupid output/presentation bug in the data view. v4.0.9beta out shortly.

Ah, that’ll do it. I also sent an email with my read-only API token on Jan 5th; can you check if that got caught too? If not, I’ll resend that email.

I was wondering if this might’ve been the issue. Glad it’s been found. As above, I can still send you my API key for testing purposes if you can’t find the original email.

Yup. Got it. Thanks and sorry for the confusion.

It’s actually a bit more involved to fix than I thought at first. It also highlights an interesting but common issue with software testing.

I was testing with a low-level unit test. My app retrieves a bunch of raw reviews and translates those into processed reviews, each with a duration. Then it uses MAD (median absolute deviation) to find sessions. Finally, rather than caching the sessions with all the underlying reviews, it creates an array of sessionSummaries that gets cached in localStorage.
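Roughly, MAD-based session splitting looks something like the sketch below. The threshold constant and all the names here are my own assumptions, not the script’s actual code:

const median = (xs: number[]): number => {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
};

// Median absolute deviation: a robust measure of gap spread.
const mad = (xs: number[]): number => {
  const m = median(xs);
  return median(xs.map((x) => Math.abs(x - m)));
};

// Start a new session wherever the gap between consecutive review
// start times is an outlier relative to the typical gap.
const splitSessions = (startTimes: number[], k = 3): number[][] => {
  if (startTimes.length === 0) return [];
  const gaps = startTimes.slice(1).map((t, i) => t - startTimes[i]);
  const cutoff = median(gaps) + k * mad(gaps);
  const sessions: number[][] = [[startTimes[0]]];
  startTimes.slice(1).forEach((t, i) => {
    if (gaps[i] > cutoff) sessions.push([t]);
    else sessions[sessions.length - 1].push(t);
  });
  return sessions;
};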

The sessionSummary objects have a start time, an end time, and counts for the total number of questions and incorrect answers. Guess how I was calculating the session duration? :angry:

The solution is to add a duration field to each sessionSummary that’s the sum of the underlying review durations, instead of just end minus start.
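In sketch form, the fix looks something like this; the field and function names are my assumptions, not necessarily the script’s actual structure:

interface Review {
  startTime: number; // ms since epoch
  duration: number;  // ms
  questions: number;
  misses: number;
}

interface Session {
  reviews: Review[]; // assumed non-empty
}

interface SessionSummary {
  startTime: number;
  endTime: number;
  questions: number;
  misses: number;
  duration: number; // NEW: sum of the underlying review durations
}

const summarize = (session: Session): SessionSummary => ({
  startTime: session.reviews[0].startTime,
  endTime: session.reviews[session.reviews.length - 1].startTime,
  questions: session.reviews.reduce((n, r) => n + r.questions, 0),
  misses: session.reviews.reduce((n, r) => n + r.misses, 0),
  // The buggy version effectively computed endTime - startTime, which
  // is 0 for a single-review session. Summing review durations fixes it.
  duration: session.reviews.reduce((ms, r) => ms + r.duration, 0),
});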

The testing principle is to always test at the highest level possible. I was testing at the layer that creates the sessions, but not at the layer that creates the sessionSummaries or at the actual presentation layer (which is really what you want your tests to focus on).

Anyway, I’ll have 4.0.9beta out before the end of the day.

Thanks for the catch!

1 Like

Just pushed v4.0.9beta. This is now the release candidate.

Changes since v4.0.8beta:

  • Fixes (hopefully for good) and adds tests for the bug @LupoMikti found when sessions contain only a single review.

  • Displays session durations at finer granularity. They’re still shown in minutes, but now with two digits past the decimal so extremely short sessions show something meaningful (see the sketch after this list).

  • More styling work on the settings form.
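A minimal illustration of that finer-grained display (the function name here is hypothetical, not the script’s actual code):

const formatMinutes = (ms: number): string => (ms / 60000).toFixed(2);

formatMinutes(4000);   // "0.07" (a 4-second session no longer displays as 0)
formatMinutes(900000); // "15.00"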

This should be it. If nothing major is found over the weekend, I’ll move it over to production Monday. (Help text improvements can be pushed after v4.0.9 goes live for real.)

2 Likes

It’s working! I should have seen this coming, but it has immediately affected my overall spq, taking it from 3.6 to 4.1.

It never occurred to me that my median would be around 1.5 seconds slower than how fast I usually answer a single-item review, but that’s just how it is. Thanks for getting that fixed! I’d much rather have them counted this way than not at all.

Yeah, you’re right that it makes sense. Calculating session duration as the span from the first start time to the last start time ignores the final review’s duration.

For me, it’s a nit. Most of my sessions are 80-200 reviews. If you have shorter sessions, or lots of singletons like you do, it has a bigger effect.

What’s funny is that I went to the effort of estimating the final durations, even writing tests, and then never actually used those values for anything until you reported the problem!

1 Like

I’m in the process of publishing v4.0.9 to production. For some reason that is completely mystifying me at the moment, it doesn’t appear to be loading the CSS.

Bear with me as I figure out why it’s broken.

1 Like

Was just about to mention the lack of CSS - good luck fixing :smiley:

v4.0.9 production is now live and working.

The only important change from the beta version is that the help file now includes the name and version of the script just below the table of contents.

Top post has been updated. I’ll also be deleting the old, non-Svelte version from greasyfork later this afternoon. EDIT: the old 3.X version of the script has now been deleted. Please upgrade to v4.0.9.

Sorry for the goat rodeo getting this thing published!

3 Likes

Yep, all working now. The help section has a great amount of detail and examples, and the tooltips should be very useful for new users.
Thanks for the update!

1 Like

Thanks for everyone’s help and suggestions. The script is immensely more valuable to me now because of them.

It’s surprisingly helpful to keep an eye both on the GanbarOmeter and on where the expected-daily-reviews line sits within my desired range.

One change it’s created in my behavior: I’m now self-studying everything in stages 1 and 2 before starting my real reviews (including radicals and vocabulary, not just kanji). I’m pretty sure this is going to improve my retention and not just let me go faster.

I should probably have made the defaults include all three item types in the self-study quiz. Before I wrote this thing, I was using the item inspector to launch the self-study quiz, and I’d just look over the vocab/radicals but only quiz on the kanji. I only checked kanji in the settings because that was my unconscious habit from doing it that way. It actually makes more sense to me to quiz on everything (I need more frequent reviews of anything in stages 1 or 2).

The defaults work pretty well for me otherwise.

4 Likes

I just updated to the new Svelte version and it’s SO AWESOME - thanks so much! One problem, though: when I click on self-study it says that there are no questions found. Do I have to have a specifically named preset?

1 Like

Thank you! That makes me happy to hear. :grin:

I think it’s because you have no kanji in stages 1 or 2. If you click over to the data view, it will show you how many radicals, kanji, and vocabulary you have in the early Apprentice stages (1 and 2).

The default settings only quiz on early-stage kanji. You may want to check all three checkboxes so you are also quizzed on radicals and vocabulary.
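In sketch form, the selection logic amounts to something like this (the types and names are my own illustration, not the script’s actual code):

type ItemType = "radical" | "kanji" | "vocabulary";

interface Item {
  type: ItemType;
  srsStage: number;
}

// The quiz pool is early-stage items filtered by the enabled checkboxes.
const quizPool = (items: Item[], enabled: Set<ItemType>): Item[] =>
  items.filter(
    (i) => (i.srsStage === 1 || i.srsStage === 2) && enabled.has(i.type)
  );

// With only "kanji" enabled and no kanji in stages 1 or 2, the pool is
// empty, hence "no questions found".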

Enjoy!

Thanks, that was exactly the issue.

1 Like

Hi all,

Sorry to bother you. I’m using this script in Chrome. Seemingly without changing any settings, the script all of a sudden looks like this:

[screenshot]
Have I messed something up?
Thank you

1 Like

It looks like you’re using an old version of the script. What version number does it say in Tampermonkey (if that’s what you’re using), and can you update it?

1 Like