Nah, I do type in; I just type things really fast and avoid spending too much time on an item.
I hate to tell you, but that entire gauge may end up sleeping with the fishes. I'll still calculate and display speeds, but probably only as a numeric value/badge/legend rather than a full-blown gauge.
I don't think the information gleaned is sufficiently actionable. It doesn't warrant a third of the display. (Especially if it's telling you you're answering most questions in about a second.)
It may become an assignment difficulty gauge of some sort.
I was so happy when WK introduced the extra study stuff. It indicates that they also agree that "extra" reviews of early-stage stuff are extremely worthwhile. I think this is especially true for anyone who, like me, only does one review session per day. It's really hard to remember stuff from lessons you performed over 24 hours ago! More iterations for new stuff makes intuitive sense.
I'm an engineer and a bit of a data geek. The old engineering adage is that "if you don't measure it, you can't improve it." And there is no data as fascinating to me as my own!
Anyway, I now have quantifiable data to show that these self study sessions launched from the Ganbarometer are paying off.
I started doing self-study reviews of things in stages 1-2 every morning before my "real" reviews in January or early February IIRC. I iteratively repeat the self-study reviews until I get 100% of the new items correct. Obviously, these extra reviews make it MUCH easier to answer correctly during my formal review session since I just reviewed those a few minutes ago (though I clearly still miss 5-10% regularly).
Because I only do these out-of-band reviews for items in stages 1-2, I feel strongly that it's purely beneficial to my learning (I still use the self-study userscript rather than WK's extra study, but it's basically the same thing). IMO, it's not cheating at all. It just means that instead of re-reviewing these items at 4h and 8h intervals, I get one "review" the day before (the lesson itself), then several in a row each morning before the formal session. I still get the normal re-quizzing after multi-day intervals for everything else.
I noticed today that this shows up pretty clearly in my own stats (from the wonderful workload graph userscript):
That little bump in overall "Apprentice" level items (the pink line) seems pretty significant to me.
Itās even more pronounced when you look at all four Apprentice stages broken out individually:
My accuracy for what I assume is Apprentice 1 stuff jumped by more than 10%!
I think the smaller jump from ~79% accuracy to ~85% accuracy in January 2021 was due to me starting to consciously review items that hadn't yet guru'd: just looking at items in the progress section of the dashboard that didn't have all five boxes ticked beneath them, and shift-clicking to bring up their pages in a separate tab if I didn't remember them.
My accuracy for enlightened items (light blue line) also appears to be improving slightly. It's too early to tell if this trend will continue, but if it does, I'm not sure what to attribute it to. It could be that increasing my accuracy for early-stage items also paid dividends with later-stage stuff. Higher accuracy and less mental friction from misses seems to help me focus and better disambiguate similar characters that I'd been confusing previously.
Anyway, this is pretty wonky analysis that probably won't interest anyone else, but I find my own data endlessly fascinating. I'm completely convinced that more frequent "extra" study of recently introduced items has already paid significant dividends.
[I wasn't sure where to put this, but since even after the extra study section was added to the dashboard I still use the Ganbarometer to launch self-study of items in stages 1 and 2, I thought this was as good a place as any.]
In the other thread I read that you are slacking off a little bit on script development, and of course I cannot allow this to happen. So here's a bug report for you (although the bug is so cute I should maybe not report it so I can enjoy it a bit longer):
Maybe you'd like to fix the Z-order for your gauge hands?
Just joking above, of course! This is super-non-urgent, so please take your time slacking off.
I was thinking "She must be running an old version of the script; I distinctly remember catching and fixing that bug."
Then I realized I'm running a dev version of the script, not the published version. Sigh.
Since I've already coded the fix, I've even less excuse for not publishing it!
4.0.10 published!
It's only been 3 months or so since I fixed this bug. What's the rush?
Works now, thanks!
I think I might have a bug in retrieving reviews. Before spending too much effort tracking it down, though, I wanted to see if anyone else is seeing issues with this script.
If you are having issues, please describe the behavior you observe. Thanks!
I'm a new user (as of today), but I think I'm having issues. My Ganbarometer is at 0 and my speedometer is maxed at 13.9. The reviews bar graph seems like it has realistic data in it. I just finished a level, though, and haven't ramped up on lessons yet. My weighted count in data is 91 for a target minimum of 130. Should the graph look like this?
Looks like it's working correctly. What is the target range set to in your settings (icon in the upper right)?
It should also be shown in parentheses on the data side.
14 questions per minute is a pretty fast clip. If thatās normal for you, you can also change the target answer speed in the settings (default is 7-10 questions per minute).
Seems like it's only broken for me (I think I've got an error-handling bug, but it's low priority for me to chase down unless others are also experiencing problems).
I'm not currently experiencing any problems. I did turn off script compatibility mode (when this was still a setting); all seems to work before and after, though.
For other users: if the meter is tilted completely to the left side, that doesn't mean "zero", just that you are far from the target range. There is no shame in adjusting it (in fact, mine is set to basically half of the default settings, as I aim for slower WK progress to pursue grammar and video lessons on the side too). The default settings are quite ambitious (more in line with the 1-2 weeks per level that power users aim for).
Small feature request if you have some time at any point: it would be great to have a setting to offset the day start time for the "reviews" graph. Sometimes I do reviews at 12-2am that I'd like to count in the "previous" day's reviews!
That makes sense. Haven't looked at the code in ages (which is a good sign).
Seems like a reasonably self-contained request with minimal impact on other things.
Famous last words…
Well, I've at least opened up and looked at the code for the first time in 9 months…
I think the only thing I need to change is this one function (as well as storing another setting, and adding the form element to the settings UI):
```ts
export const inSameDay = (x: Date, ref: Date): boolean => {
  return (
    x.getDate() === ref.getDate() &&
    x.getMonth() === ref.getMonth() &&
    x.getFullYear() === ref.getFullYear()
  );
};
```
I think all I need to do is add an offset to the reference date (`ref`). I think I'll just add a setting for an offset of +/- n hours (minute granularity or finer seems unnecessary).
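A minimal sketch of what that change might look like. This is just my guess at the shape of it; the `offsetHours` parameter name and its default of 0 are assumptions, not the script's actual code:

```ts
// Sketch: shift both dates back by the offset before comparing calendar days.
// With offsetHours = 3, reviews done before 3am count toward the previous day.
const inSameDay = (x: Date, ref: Date, offsetHours = 0): boolean => {
  const shiftMs = offsetHours * 60 * 60 * 1000;
  const a = new Date(x.getTime() - shiftMs);
  const b = new Date(ref.getTime() - shiftMs);
  return (
    a.getDate() === b.getDate() &&
    a.getMonth() === b.getMonth() &&
    a.getFullYear() === b.getFullYear()
  );
};
```

With an offset of 0 this behaves exactly like the current function, so existing users would be unaffected.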
The only gotcha I can think of is that storing another setting requires a revision bump to the settings. I vaguely remember adding code to convert people's settings they've already stored on a revision bump, but I haven't looked at that code yet.
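For what it's worth, a revision-bump migration along these lines is a common pattern. Everything below (the revision numbers, the `dayStartOffsetHours` field name, the `migrateSettings` helper) is hypothetical, not the script's actual code:

```ts
// Hypothetical settings migration: older stored settings get the new field
// with a neutral default, so existing users see no behavior change.
type StoredSettings = Record<string, unknown>;

const CURRENT_REVISION = 2; // assumed; bump whenever the settings shape changes

const migrateSettings = (stored: StoredSettings): StoredSettings => {
  const revision = typeof stored.revision === "number" ? stored.revision : 1;
  if (revision >= CURRENT_REVISION) return stored;
  return { ...stored, revision: CURRENT_REVISION, dayStartOffsetHours: 0 };
};
```

Running every load through a function like this means the rest of the code can assume the latest settings shape.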
I'll probably publish a dev version sometime in the next few days before pushing it out for everyone, just in case I screw something up.
Could we increase the selectable range of the qpm?
Or maybe it's answering on Flaming Durtles that makes the speed wrong (I'm fast, though).
I'm going to go out on a limb and guess that this is an artifact of how Flaming Durtles creates review records.
Extremely fast typists can average around 100 words per minute, but not if they are having to recall anything.
I don't think increasing the range would actually tell you anything useful.
Sorry, but this one doesnāt seem worth fixing.
I've looked at the data; Flaming Durtles reports either 0.6 or 1.2 seconds per question.
Nothing to fix there indeed.
The universal constant is still in effect: Nothing is ever easy.
For some reason, I can't seem to run `npm run dev` to run my locally edited script on the dashboard any longer.
I'm not sure, but I think this might be because Tampermonkey added "subresource integrity checks" (which is probably a good thing). If my guess is right, I need to calculate sha-1 hashes for the script and CSS files that I'm using with `@require file://...` and `@resource css file://...`.
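If I understand Tampermonkey's subresource integrity correctly, the digest gets appended to the `@require`/`@resource` URL as a fragment (e.g. `#sha1=<hex digest>`); that part is my assumption about the metadata syntax. Computing the digest itself is straightforward with Node's built-in crypto module:

```ts
import { createHash } from "node:crypto";

// Computes the hex sha-1 digest of a file's contents, suitable for
// appending to a userscript metadata URL, e.g.:
//   // @require file://...#sha1=<hex digest>
const sha1Hex = (contents: string): string =>
  createHash("sha1").update(contents).digest("hex");
```

Read each local file, run it through this, and paste the result into the metadata block; the hash has to be recomputed after every local edit, which is exactly why this fights with a live-reload workflow.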
If I can't figure this out shortly, I'll just publish a dev version on Tampermonkey, but live updates are really convenient.
So after messing with this for several hours, I realize it's just more headache than it's worth to me to make any further changes to this as a dashboard userscript.
Apologies, but with recent and imminent browser and Tampermonkey changes, I just can't justify spending any more of my free time fighting with the script. I kinda pushed the boundaries of what's possible with a userscript (Svelte, Jest/TDD, live server with Tampermonkey, etc.) and am now in dependency hell.
It will be significantly easier for me to publish any further features in a standalone app rather than as a script that runs on the dashboard. (Not that that will be a walk in the park, either, but it will be significantly easier.)
I'm sorry to report that the current 4.10 version will be the end of the line as a dashboard script. New features (like shifting day boundaries) will only become available (eventually) in a standalone app (kinda like wkstats and its ilk).
Moving it to a full-blown app will allow me to:
- Add more involved (and hopefully interesting) features. After 3 years with WK, I have even more wild ideas…
- Make significant performance improvements (better caching, lazy-loading, prefetch on hover, etc.)
- Create a staging site as well as the production site (letting me get better/faster feedback on desired changes).
Alas, this means that the new features won't appear on the dashboard; you'll have to point your browser to a different (non-WaniKani) URL in another tab or whatever.
How big a deal is this to everyone?
- I will continue to only use v4.10 as a dashboard script (and not use an app)
- I would switch entirely to a v.Next app (if it had more/better features)
- I would use both v4.10 on the dashboard and v.Next as an app
Welp, this oneās dead with the recent API changes Dx