There's plenty of room up top. Maybe a throbbing "Warnings" menu item that appears conditionally?
The beta is working out pretty great!
I'll compile any suggestions I have for a future message. From what I can tell at a glance, a lot of my "proofing" suggestions will center around word choice, as many of the words used are what I'd consider too advanced for the average user (speaking as someone whose job involves user acceptance testing and writing helpful explanations of things in our internal system, plus training videos). For example, you have the phrase "These 'widgets' display distilled information" and that's already going to lose people because of the word "distilled".
It seems this still isn't working for me, as my single-review sessions still show 0/0 etc.
Turns out I end up with a fair amount of these kinds of sessions, because when I miss an item in review it's usually only 1 item haha
This worked just fine for me, although I have to say this usually worked on its own when Tampermonkey would actually update the script instead of me having to manually reinstall from the URL. But it's nice to have it more guaranteed now. I was getting pretty tired of needing to retype my settings over and over tbh.
Please do compile any suggestions. It's (mostly) just a markdown file. If you're familiar with git/GitHub you could even clone a copy, edit, and make a pull request: ganbarometer-svelte/Help.svx at v4.0 · wrex/ganbarometer-svelte · GitHub
Alternately, just cut and paste the markdown bits into an editor and send them to me (rw at pobox.com) and I can make the diff.
I despair if "distilled" is considered too difficult for English speakers learning another language, though. Just wait until they get to filial piety.
Hmm. This is odd. The fix should be included. Are you certain you're running 4.0.8? (Obviously you are, or you wouldn't see the Help text.) Does it still show 0s no matter how many days of reviews you retrieve?
I think I still have an API key for you, so I'll look into it. In the interim, can you cut and paste all the session info and send it to me?
It only appeared that way. A few releases ago, I added a VERSION string to the gbSettings variable stored in localstorage. Now, whenever I make an incompatible change to the object structure of anything stored locally, I update the version. This doesn't happen every release.
When I first put in the version check, I just reset to defaults if there was a mismatch. This release copies over what it can if the version changed (and just adds anything new).
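That migration step can be sketched roughly like this. The `gbSettings`/`VERSION` names follow the post, but the concrete fields and defaults are invented for illustration; the real object shape is surely different:

```typescript
// Hypothetical sketch of the version-check migration described above.
const VERSION = "4.0.8";

interface Settings {
  version: string;
  theme: string;  // invented example field
  qpmMin: number; // invented example field
}

const DEFAULTS: Settings = { version: VERSION, theme: "dark", qpmMin: 12 };

function migrateSettings(stored: Partial<Settings> | null): Settings {
  // Nothing stored yet: start from the defaults.
  if (!stored) return { ...DEFAULTS };
  // Same version: stored values win, defaults only fill true gaps.
  if (stored.version === VERSION) return { ...DEFAULTS, ...stored };
  // Version mismatch: keep whatever stored values still apply, pick up
  // any newly added keys from the defaults, and stamp the new version.
  return { ...DEFAULTS, ...stored, version: VERSION };
}
```

The point is that a mismatched version no longer wipes the user's choices; only genuinely new keys fall back to defaults.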
Re: sessions with a single review not getting their duration set to the median.
I'm mystified. I even created a test for this specific case and everything. It's passing.
Here's the test itself:
```ts
it("sets the duration of single review sessions to the median", async () => {
  // median(infinity, 1000, 2000, 3000, 4000, 5000, 30000) === 3500
  mockReviewCollection([
    mockReview({ reviewData: { created_at: "2019-10-01T00:00:00.000Z" } }), // session with single review
    mockReview({ reviewData: { created_at: "2019-10-04T00:00:00.000Z" } }), // 2s duration
    mockReview({ reviewData: { created_at: "2019-10-04T00:00:02.000Z" } }), // 1s duration
    mockReview({ reviewData: { created_at: "2019-10-04T00:00:03.000Z" } }), // 5s duration
    mockReview({ reviewData: { created_at: "2019-10-04T00:00:08.000Z" } }), // 4s duration
    mockReview({ reviewData: { created_at: "2019-10-04T00:00:12.000Z" } }), // 3s duration
    mockReview({ reviewData: { created_at: "2019-10-04T00:00:15.000Z" } }), // unknown (30s)
  ]);
  const reviews = await getReviews(7);
  const sessions = parseSessions(reviews);
  expect(sessions.length).toBe(2);
  expect(sessions[0].reviews.length).toBe(1);
  expect(sessions[0].reviews[0].duration).toBe(3500);
});
```
It first mocks WKOF to return seven reviews (a single session of one review, then another session with six reviews).
Then it checks that there are two sessions, that the first session has a single review, and that the duration of the first session is set to the median value.
Here's the specific section of code that implements the logic (and I've verified it's present in v4.0.8beta):
EDIT: whoops, copy/pasted the wrong code. The previous one only set the final review retrieved. Here's the code that does it for the final review within each session:
```ts
export const parseSessions = (reviews: Review[]): Session[] => {
  // ... other logic
  // Force the duration of the last review in each session to the overall median
  sessionSlices.forEach((sess) => {
    sess.reviews[sess.reviews.length - 1].duration = median_duration;
  });
};
```
Clearly you're retrieving more than a single session, so I'm mystified. Let me write another test where it's the second-to-last session that has a single review (matching your case exactly).
I can't imagine how it would affect things, but you mentioned you had to keep changing your settings. Could you tell me what changes you've made from the defaults?
Correct, all of the previous days are also showing zeroes. I will send the copy/paste of the session info in an email to you.
Ganbarometer: 100 - 180; labels changed to é ććčÆććéć; rad12 = 1.05, kan12 = 3, voc12 = 1.5, app34 = 0.75, guru = 0.9, master = 0.25, enl = 0.01; no quizzes selected
Speed: past 7 days; 12 - 20 qpm; 40 - 120 rpd
Appearance: dark theme selected with changes to Track, Text, Fill, hlTrack colors
"All of the previous days"? I don't follow.
A value of 0.0 spq (infinity qpm) indicates it thinks the total duration of a session was 0 seconds. You've retrieved 338 reviews. The header was cut off in your screenshot, so I'm unsure how many sessions it found, but the most recent session with 10 reviews had reasonable spq/qpm numbers. The session before that, with a single review, had a duration of zero, though.
How many sessions did it find? And how many of them show 0.0 spq? This is very weird; it's behaving exactly like you didn't get the fix I showed above (but I can't figure out how that could be).
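For reference, the arithmetic behind that symptom (field names here are assumed for illustration, not taken from the script): spq is the session duration divided by the question count, and qpm is the question count per minute, so a zero-second session produces exactly the 0.0 spq / infinity qpm pattern above:

```typescript
// How a zero-duration session produces "0.0 spq" and "infinity qpm".
interface SessionSummary {
  questions: number;       // assumed field name
  durationSeconds: number; // assumed field name
}

function secondsPerQuestion(s: SessionSummary): number {
  return s.durationSeconds / s.questions;
}

function questionsPerMinute(s: SessionSummary): number {
  return (s.questions / s.durationSeconds) * 60;
}

// A single-review session whose duration was (wrongly) computed as 0:
const single: SessionSummary = { questions: 1, durationSeconds: 0 };
secondsPerQuestion(single); // 0 / 1  -> 0.0 spq
questionsPerMinute(single); // 1 / 0  -> Infinity qpm
```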
I've not received the email yet. I don't think I ever requested an API key from you, either; please send one if you don't mind (you can delete/disable it as soon as I debug this problem).
Oh, that's very strange if you haven't been getting the emails. Does your email client block Protonmail? I will try again (both the API token and this attachment) with a Gmail address.
My bad. It was in my Gmail spam bucket for some reason.
EDIT: SMACKS FOREHEAD. I found it. It's a stupid output/presentation bug in the data view. v4.0.9beta out shortly.
Ah, that'll do it. I also sent an email with my read-only API token on Jan 5th; can you check if that got caught too? If not, I'll resend that email.
I was wondering if this might've been the issue. Glad it's been found. As above, I can still send you my API key for testing purposes if you can't find the original email.
Yup. Got it. Thanks and sorry for the confusion.
It's actually a bit more involved to fix than I thought at first. It also highlights an interesting but common issue with software testing.
I was testing with a low-level unit test. My app retrieves a bunch of raw reviews and translates those into processed reviews with a duration. Then it uses MAD to find sessions. Finally, rather than caching the sessions with all the underlying reviews, it creates an array of sessionSummaries that gets cached in localstorage.
The sessionSummary objects have a start time, an end time, and counts for the total number of questions and incorrect answers. Guess how I was calculating the session duration?
The solution is to add a duration to each sessionSummary that's the sum of the underlying review durations instead of just end minus start.
The testing principle is to always test at the highest level possible. I was testing at the layer that creates the sessions, but not the layer that creates the sessionSummaries, nor the actual presentation layer (which is really what you want your tests to focus on).
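A minimal sketch of the bug and the fix described above, with invented field names (the real sessionSummary shape may differ):

```typescript
interface Review {
  startTime: number; // ms since epoch
  duration: number;  // ms; the session's last review gets the median
}

interface SessionSummary {
  start: number;
  end: number;
  questions: number;
  duration: number; // the new field: sum of review durations
}

function summarize(reviews: Review[]): SessionSummary {
  const first = reviews[0];
  const last = reviews[reviews.length - 1];
  return {
    start: first.startTime,
    end: last.startTime, // session boundaries are review *start* times
    questions: reviews.length,
    // The old calculation was effectively `end - start`, which is 0 for
    // a session containing a single review (start and end coincide).
    // Summing the per-review durations, with the final one already
    // forced to the median, gives a sensible value instead.
    duration: reviews.reduce((sum, r) => sum + r.duration, 0),
  };
}
```

With a single review whose duration was forced to the median (say 3500 ms), `end - start` is 0 but the summed `duration` is 3500, which is exactly the difference between the buggy and fixed behavior.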
Anyway, I'll have 4.0.9beta out before the end of the day.
Thanks for the catch!
Just pushed v4.0.9beta. This is now the release candidate.
Changes since v4.0.8beta:
- Fixes (hopefully for good) and adds tests for the bug @Lupo_Mikti found when sessions contain only a single review.
- Displays finer granularity of session durations. It's still shown in minutes but adds two digits past the decimal, so extremely short sessions show something meaningful.
- More styling of the settings form.
This should be it. If nothing major is found over the weekend, I'll move it over to production Monday. (Help text improvements can be pushed after v4.0.9 goes live for real.)
It's working! I should have seen this coming, but it has immediately affected my overall spq, taking it from 3.6 to 4.1.
It never occurred to me that my median would be around 1.5 seconds slower than how fast I usually answer a single-item review, but that's just how it is. Thanks for getting that fixed! I'd much rather have them counted this way than not at all.
Yeah, you're right that it makes sense. Calculating session duration as first start time to last start time ignores the final review duration.
For me, it's a nit. Most of my sessions are 80-200 reviews. If you have shorter sessions, or lots of singletons like yourself, it has a bigger effect.
What's funny is I went to the effort of estimating the final durations, even writing tests, then never actually used those values for anything until you reported the problem!
I'm in the process of publishing v4.0.9 to production. For some reason that is completely mystifying me at the moment, it doesn't appear to be loading the CSS.
Bear with me as I figure out why it's broken.
Was just about to mention the lack of CSS. Good luck fixing it!
v4.0.9 production is now live and working.
The only important change from the beta version is that the help file now includes the name and version of the script just below the table of contents.
Top post has been updated. The old, non-svelte 3.X version of the script has been deleted from greasyfork. Please upgrade to v4.0.9.
Sorry for the goat rodeo getting this thing published!
Yep, all working now. The help section has a great amount of detail and examples and the tooltips should be very useful for new users.
Thanks for the update!
Thanks for everyone's help and suggestions. This is immensely more valuable to me now because of it.
It's surprisingly helpful to keep an eye both on the GanbarOmeter and on where the expected-daily-reviews line sits within my desired range.
One change it's created in my behavior: I'm now self-studying everything in stages 1 and 2 before starting my real reviews (including radicals and vocabulary, not just kanji). I'm pretty sure this is going to improve my retention and not just let me go faster.
I should probably have made the defaults include all three item types in the self-study quiz. Before I wrote this thing, I was using the Item Inspector to launch the self-study quiz, and I'd just look over the vocab/radicals but only quiz on the kanji. I only checked kanji in the settings because that was my unconscious habit from doing it that way. It actually makes more sense to me to quiz on everything (I need more frequent reviews of anything in stages 1 or 2).
The defaults work pretty well for me otherwise.
I just updated to the new svelte version and it's SO AWESOME, thanks so much! One problem, though: when I click on self-study it says that there are no questions found. Do I have to have a specifically named preset?
Thank you! That makes me happy to hear.
I think it's because you have no kanji in stages 1 or 2. If you click over to the data view, it will show you how many radicals, kanji, and vocabulary you have in the early apprentice stages (1 and 2).
The default settings only quiz on early stage kanji. You may want to click all three checkboxes so you are also quizzed on radicals and vocabulary.
Enjoy!
Thanks, that was exactly the issue.