Yikes! Not my intent at all. Breeze Dark is awesome.
I’m showing my ignorance. I didn’t even realize Breeze Dark HAD configuration options. That solves my first and only real issue completely.
I’ve got a brute-force CSS workaround for the second item (self-study quiz). I just threw these two CSS rules into my top-level Svelte component’s styling:
Beta version v4.0.8beta is now available. Those who have installed a previous beta should get the update automagically when Tampermonkey notices the change (Utilities → Check for userscript updates to force an update).
This is the production release candidate.
Changes since the last beta:
Updated all the underlying npm packages to @latest
Fixed all the tests and wrote a few more. Then broke them all again by installing more svelte plugins (mdsvex and svelte-number-spinner). Sigh.
Sets the duration of the last review in each session to the median of all reviews retrieved
Changed units to “qpm”, “spq”, and “rpd” (questions-per-minute, seconds-per-question, and reviews-per-day) throughout.
Added in situ help documentation: There’s now a Help menu, and the settings have little “info” icons to pull up in-context help.
Much theming. Should look reasonable everywhere in default or dark themes now. Looks quite nice with Breeze Dark.
Also added some CSS overrides for the self-study quiz so that correct/incorrect answers highlight properly.
First attempt at migrating settings between versions where possible. Moving from any 4.0.X version to any other 4.0.Y version should preserve most user settings.
Since the GanbarOmeter value directly correlates to the number of items in different SRS stages, I’ve changed the labels to 少 and 多 (more sensible than slow/fast).
Changed to a Svelte NumberSpinner with three digits of precision for the different SRS weights. You can use the mouse or the keyboard to adjust the values. This was purely to keep @Kumirei off my back until I create the super-complicated version in 4.1.
This should have fixed every issue I’m currently aware of other than some infrastructure stuff (borked tests). Please let me know ASAP if you discover anything that needs fixing (minor or not).
I’d also appreciate any reviews of the Help text. It’s longer than it needs to be, but hopefully it explains things well enough.
I’ve been using the 4.0.8 dev version for several days now, and love it. The GanbarOmeter calculation and Expected Daily Reviews line in the bar chart have been working really great for me.
Buuuuuuuut, I’m already thinking about 4.1!
There’s really no rush to come out with anything better. But there is one remaining concern I’m still thinking about for 4.1:
If a user slacks off and doesn’t do all their reviews for a few days, nothing in the displays would nag at them, even though they’d be creating potentially exponential headaches for their future self.
Both the GanbarOmeter and EDR calculate their values from the SRS stage distribution of upcoming assignments — they don’t look at anything else. In particular, they don’t even look at when assignments are scheduled (only their current stage).
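To illustrate what “calculated from the SRS stage distribution” means, here’s a hypothetical sketch. The stage numbers, weights, and function names below are all made up for illustration, not the script’s actual code or settings values:

```typescript
// Hypothetical sketch of a stage-weighted count. Stage numbering and the
// weight values are illustrative only, not the script's actual ones.
interface Assignment {
  srsStage: number; // e.g. 1–4 apprentice, 5–6 guru, 7 master, 8 enlightened
}

// Earlier stages weigh more heavily: items there come back for review sooner.
const stageWeights: Record<number, number> = {
  1: 1.0,
  2: 1.0,
  3: 0.75,
  4: 0.5,
};

export const weightedApprenticeCount = (assignments: Assignment[]): number =>
  assignments.reduce((sum, a) => sum + (stageWeights[a.srsStage] ?? 0), 0);
```

Note that nothing here looks at *when* an assignment is due, only its current stage, which is exactly why a pile-up of overdue reviews goes unnoticed.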
After going back and forth in my head several times, I’ve decided that both the GanbarOmeter and Reviews chart are absolutely fine as they are. No further tweaks are planned nor needed. (I’d understand if people are skeptical!)
I do, however, want to add some sort of warning if reviews start piling up for the next few days. I’ve not figured out exactly what I want to show, but maybe even a simple textual warning suffices.
After lots of pondering, I think what makes the most sense is to simply count how many reviews are scheduled today, tomorrow, and the day after. If any one of those three numbers is greater than the target maximum reviews-per-day, display a warning. If the average of tomorrow and the day after is above the max, display a particularly stern warning.
It doesn’t make sense to go beyond the day after tomorrow as you won’t see any early stage items scheduled more than a couple days out.
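In pseudocode-ish TypeScript, the idea is something like this (function and type names are made up, not actual script code):

```typescript
// Sketch of the proposed pile-up warning logic (names are illustrative).
type WarningLevel = "none" | "warning" | "stern";

export const pileUpWarning = (
  dueToday: number,
  dueTomorrow: number,
  dueDayAfter: number,
  maxReviewsPerDay: number
): WarningLevel => {
  // Particularly stern warning if the *average* of the next two days
  // exceeds the target maximum...
  if ((dueTomorrow + dueDayAfter) / 2 > maxReviewsPerDay) return "stern";
  // ...otherwise a plain warning if any single day exceeds it.
  if ([dueToday, dueTomorrow, dueDayAfter].some((n) => n > maxReviewsPerDay)) {
    return "warning";
  }
  return "none";
};
```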
Where would you like to see such a warning, and what should it look like? Currently, I’m just thinking of a text warning below the Speed gauge, but I’m open to ideas.
Probably would make the most sense in the Reviews section, but I don’t know how it could be accomplished without increasing the vertical space of the script… maybe as an aside to the right?
I’ll compile any suggestions I have for a future message. From what I can tell at a glance, a lot of my ‘proofing’ suggestions will center around word choice, as many of the words used are what I’d consider too advanced for the average user (speaking as someone whose job involves user acceptance testing and attempting to write helpful explanations of things in our internal system + training videos). For example, you have the phrase “These ‘widgets’ display distilled information” and that’s already going to lose people because of the word ‘distilled’.
It seems this still isn’t working for me as my single-review sessions still show 0/0 etc.
Turns out I end up with a fair amount of these kinds of sessions because when I miss an item in review it’s usually only 1 item haha
This worked just fine for me, although I have to say this usually worked on its own when Tampermonkey would actually update the script instead of me having to manually reinstall from the URL. But it’s nice to have it more guaranteed now. I was getting pretty tired of needing to retype my settings over and over, tbh.
Alternatively, just cut and paste the markdown bits into an editor and send it to me (rw at pobox.com), and I can make the diff.
I despair if “distilled” is considered too difficult for English speakers learning another language, though. Just wait until they get to filial piety.
Hmm. This is odd. The fix should be included. Are you certain you’re running 4.0.8? (Obviously you are or you wouldn’t see the Help text.) Does it still show 0s no matter how many days of reviews you retrieve?
I think I still have an API key for you so I’ll look into it. Can you cut and paste all the session info and send it to me in the interim?
It only appeared that way. A few releases ago, I added a VERSION string to the gbSettings variable stored in localStorage. Now, whenever I make an incompatible change to the object structure of anything stored locally, I update the version. This doesn’t happen every release.
When I first put in the version check, I just reset to defaults if there was a mismatch. This release copies over what I can if the version changed (and just adds anything new).
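Conceptually, the migration is something like this (a sketch with made-up key names, not the actual script code):

```typescript
// Hypothetical sketch of version-checked settings migration.
// The shape of Settings and the key names are illustrative only.
interface Settings {
  VERSION: string;
  [key: string]: unknown;
}

export const migrateSettings = (
  stored: Settings | null,
  defaults: Settings
): Settings => {
  // Nothing stored (or unreadable): fall back to defaults.
  if (!stored) return { ...defaults };
  // Same version: use the stored settings as-is.
  if (stored.VERSION === defaults.VERSION) return stored;
  // Version mismatch: start from defaults, then copy over any stored value
  // whose key still exists. New keys keep their default values; keys that
  // no longer exist are dropped.
  const migrated: Settings = { ...defaults };
  for (const key of Object.keys(stored)) {
    if (key !== "VERSION" && key in defaults) {
      migrated[key] = stored[key];
    }
  }
  return migrated;
};
```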
Re: sessions with single review not getting their duration set to the median.
I’m mystified. I even created a test for this specific case and everything. It’s passing:
It first mocks WKOF to return seven Reviews (a single session of one review, then another session with six reviews).
Then it checks that there are two sessions, that the first session has a single review, and that the duration of the first session is set to the median value.
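For the curious, here’s a self-contained approximation of that test. The session splitter below is a deliberately simplified stand-in that splits on a fixed time gap (the real script uses a MAD-based heuristic and mocks WKOF), so treat it as a sketch of the scenario, not the actual test code:

```typescript
// Simplified stand-in: split reviews into sessions on a fixed time gap,
// then set each session's last-review duration to the overall median.
interface Review {
  started: number; // seconds since some epoch
  duration: number; // seconds
}
type Session = Review[];

const median = (xs: number[]): number => {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
};

const parseSessions = (reviews: Review[], gap = 600): Session[] => {
  const sessions: Session[] = [];
  for (const r of reviews) {
    const last = sessions[sessions.length - 1];
    if (last && r.started - last[last.length - 1].started <= gap) {
      last.push(r);
    } else {
      sessions.push([r]);
    }
  }
  const m = median(reviews.map((r) => r.duration));
  sessions.forEach((s) => (s[s.length - 1].duration = m));
  return sessions;
};

// Seven reviews: one lone review, then a six-review session an hour later.
const reviews: Review[] = [
  { started: 0, duration: 99 },
  ...[3600, 3610, 3620, 3630, 3640, 3650].map((t) => ({
    started: t,
    duration: 10,
  })),
];
const sessions = parseSessions(reviews);
```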
Here’s the specific section of code that implements the logic (and I’ve verified it’s present in v4.0.8beta):
EDIT: whoops copy/pasted the wrong code. Previous one only set the final review retrieved. Here’s the code that does it for the final review within each session:
export const parseSessions = (reviews: Review[]): Session[] => {
  // ... other logic (builds sessionSlices and computes median_duration)
  // Force the duration of the last review in each session to the overall median
  sessionSlices.forEach((sess) => {
    sess.reviews[sess.reviews.length - 1].duration = median_duration;
  });
  return sessionSlices;
};
Clearly you’re retrieving more than a single session, so I’m mystified. Let me write another test where it’s the second to last session that has a single review (matching your case exactly).
I can’t imagine how it would affect things, but you mentioned you had to keep changing your settings. Could you tell me what changes you’ve made from the defaults?
A value of 0.0 spq (infinity qpm) indicates it thinks the total duration of a session was 0 seconds. You’ve retrieved 338 reviews. The header was cut off in your screenshot, so I’m unsure how many sessions it found, but the most recent session with 10 reviews had reasonable spq/qpm numbers. The session before that, with a single review, had a duration of zero, though.
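For reference, the units relate like this (a minimal sketch; the function names are mine, not the script’s):

```typescript
// spq = seconds-per-question; qpm = questions-per-minute.
// A session with zero total duration yields 0.0 spq and Infinity qpm,
// which is exactly the symptom being described.
export const secondsPerQuestion = (
  durationSecs: number,
  questions: number
): number => (questions > 0 ? durationSecs / questions : 0);

export const questionsPerMinute = (spq: number): number => 60 / spq;
```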
How many sessions did it find? And how many of them show 0.0 spq? This is very weird, it’s behaving exactly like you didn’t get the fix I showed above (but I can’t figure out how that could be).
I’ve not received the email yet. I don’t think I ever requested an API key from you, either; please send one if you don’t mind (you can delete/disable it as soon as I debug this problem).
Oh that’s very strange if you haven’t been getting the emails. Does your email client block protonmail? I will try again (both the API token and this attachment) with a gmail address.
Ah, that’ll do it. I also sent an email with my read only API token on Jan 5th, can you check if that got caught too? If not, I’ll resend that email.
I was wondering if this might’ve been the issue. Glad it’s been found. As above, I can still send you my API key for testing purposes if you can’t find the original email.
It’s actually a bit more involved to fix than I thought at first. It also highlights an interesting but common issue with software testing.
I was testing with a low-level unit test. My app retrieves a bunch of raw reviews and translates those into processed reviews with a duration. Then it uses MAD (median absolute deviation) to find sessions. Finally, rather than caching the sessions with all the underlying reviews, it creates an array of sessionSummaries that gets cached in localStorage.
The sessionSummary objects have a start-time, end-time, and counts for the total number of questions and incorrect answers. Guess how I was calculating the session duration?
The solution is to add a duration to each sessionSummary that’s the sum of the underlying review durations instead of just end-start.
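In sketch form (field names are illustrative, not the actual script code), the difference between the two calculations looks like this:

```typescript
// Sketch of the fix: derive a summary's duration from the sum of its
// reviews' durations rather than end-minus-start.
interface Review {
  started: number; // seconds
  duration: number; // seconds
}

interface SessionSummary {
  startTime: number;
  endTime: number;
  questions: number;
  duration: number; // sum of review durations, NOT endTime - startTime
}

export const summarize = (reviews: Review[]): SessionSummary => ({
  startTime: reviews[0].started,
  endTime: reviews[reviews.length - 1].started,
  questions: reviews.length,
  // end - start is 0 for a single-review session; summing durations is not.
  duration: reviews.reduce((sum, r) => sum + r.duration, 0),
});
```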
The testing principle is always to test at the highest level possible. I was testing at the layer that creates the sessions, but not the layer that creates the sessionSummaries nor the actual presentation layer (which is really what you want your tests to focus on).
Anyway, I’ll have 4.0.9beta out before the end of the day.
Just pushed v4.0.9beta. This is now the release candidate.
Changes since v4.0.8beta:
Fixes (hopefully for good) and adds tests for the bug @Lupo_Mikti found when sessions contain only a single review.
Display session durations with finer granularity. They’re still shown in minutes but with two digits past the decimal, so extremely short sessions show something meaningful.
More styling with the settings form.
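The finer-granularity duration display above boils down to something like this (illustrative, not the actual code):

```typescript
// Format a session duration in minutes with two digits past the decimal,
// so a 9-second session reads "0.15" instead of rounding away to "0".
export const formatMinutes = (durationSecs: number): string =>
  (durationSecs / 60).toFixed(2);
```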
This should be it. If nothing major is found over the weekend, I’ll move it over to production Monday. (Help text improvements can be pushed after v4.0.9 goes live for real.)
It’s working! I should have seen this coming, but it has immediately affected my overall spq, taking it from 3.6 to 4.1.
It never occurred to me my median would be around 1.5 seconds slower than how fast I usually answer a single item review, but that’s just how it is. Thanks for getting that fixed! I’d much rather have them counted this way than not at all.
Yeah, you’re right that it makes sense. Calculating session duration as first start time to last start time ignores the final review duration.
For me, it’s a nit. Most of my sessions are 80-200 reviews. If you have shorter sessions, or lots of singletons like yourself, it has a bigger effect.
What’s funny is I went to the effort of estimating the final durations, even writing tests, then never actually used those values for anything until you reported the problem!
I’m in the process of publishing v4.0.9 to production. For some reason that is completely mystifying me at the moment, it doesn’t appear to be loading the CSS.