There are people who systematically skip vocab lessons and end up with hundreds or possibly thousands of pending lessons. Do you do anything to stop the URL from becoming too long in these cases?
I am not sure I understand the question entirely. The queue I auto-generate is based on the user's lesson order preferences and their lesson batch size preferences (the default size is 5). There is nothing to stop the queue from being manipulated (for now). Please just use it responsibly.
We won't register lessons that you haven't unlocked, so if you happen to include a subject id that isn't ready for learning, the subsequent quiz registration for that subject is effectively a no-op.
You think you could add a method to get the current queue? I am inclined to use it, but since I work primarily against WKOF data in Omega, I think it would be easiest if I could get the IDs, do my processing against WKOF, then call wkQueue.applyManipulation() to replace the queue.
edit:
Just noticing that you support WKOF already, and I probably want to use addTotalChange? Although, is it intended that addTotalChange calls applyManipulation rather than the private addTotalChange method?
edit2:
Also, is there any reason why you didn't publish it as a library script?
If you want I can add a method to get the current queue, but I am not sure what the advantage of that would be. The callback function (in the following example the function applyPreset()) gets passed the current queue, and you can access wkof items if you add openFramework: true in the options object:
wkQueue.addTotalChange(applyPreset, {openFramework: true});
async function applyPreset(currentQueue) {
    // code to reorder and/or filter currentQueue
    // you can access the wkof item of each queue element like this:
    // currentQueue[0].item
    ...
    return result;
}
If you need more endpoints loaded into the wkof items, you can pass a configuration object with the openFrameworkGetItemsConfig property, which will be forwarded to wkof's get_items():
wkQueue.addTotalChange(applyPreset, {openFramework: true, openFrameworkGetItemsConfig: {wk_items: {options: {assignments: true}}}});
function applyPreset(currentQueue) {
    return currentQueue.sort((i0, i1) => i0.item.assignments.srs_stage - i1.item.assignments.srs_stage);
}
Sorry that there is no documentation yet; I hope the intended use becomes a bit clearer.
addTotalChange, addFilter and addReorder are not really handled differently; I just divided the possible use cases into three groups which, in my opinion, should ideally be applied in this order: first addTotalChange, which might do anything it wants to the queue (maybe even add new items); then addFilter, which might remove items or change their order; and finally addReorder, which should keep all items and only switch their order. So if one userscript adds items in addTotalChange() and a second userscript only orders items alphabetically (or whatever) in addReorder(), the second script is guaranteed to also affect the items added by the first.
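To make that ordering concrete, here is a rough sketch of registering one manipulation of each kind. The callback bodies are invented for illustration; only the method names and the {subject: true} option appear elsewhere in this thread, so treat the details as assumptions rather than documented behaviour.
wkQueue.addTotalChange(q => [1, ...q]); // may add items (here: prepend subject id 1)
wkQueue.addFilter(q => q.filter(i => i.subject.subject_category === `Kanji`), {subject: true}); // may drop items
wkQueue.addReorder(q => [...q].sort(() => Math.random() - 0.5)); // naive shuffle: keeps all items, only changes their order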
In the interface you have addTotalChange mapped to applyManipulation; shouldn't it be mapped to the internal addTotalChange?
Yes, that's right. It was already 3am yesterday.
I did the same thing with Item Info Injector: it can be @require'd in any userscript, but optionally it can also be installed manually to guarantee that the latest script version is always used, even if the requiring script points at an older version.
I actually did not test yesterday if adding items really works, but it seems to go without problems if I'm in an extra study session:
wkQueue.addTotalChange(q => [1, ...q]);
This adds the item with id 1 to the front of the queue. During reviews, WK gets confused when trying to submit that I correctly answered the ground radical (which I have already burned) and shows the "connection lost" screen.
One more thing I should mention: Queue Manipulator makes sure to call the callback function whenever needed, which might happen more than once. That also means that the intended use is to register the callback(s) when the script is loaded, without needing to check yourself which page is currently displayed.
Example: go to a page that already uses Turbo and register a manipulation such as
wkQueue.addFilter(q => q.filter(i => i.subject.subject_category === `Kanji`), {subject: true});
and then click on the "Reviews" button in the page header: Queue Manipulator applies the manipulation immediately on review start.
Hmm. Perhaps not all of the information (I hope)?
It occurs to me that, as it stands, the somewhat confusing "item accuracy" statistic (vs. question accuracy) also disappears from the preview UI.
I'm usually against removing info that was previously available, but in this case, I think it's a good thing. Two different percentages were a little confusing. Either does a fine job of answering "how difficult did I find this session/subject", but the values cannot be compared to each other. "Which accuracy stat do you mean" is an annoying and unnecessary question.
Since the SRS buckets on the dashboard and "Your Progress" sections on individual items already show you how subjects are progressing, I only see a downside to resurrecting the "item accuracy" stat.
Please let me know if I'm wrong about any of this.
My definitions:
Item accuracy
The percentage of subjects answered correctly on the first attempt for both meaning and reading components during a review session.
This is/was displayed on the summary pages. It tends to be a significantly smaller value than question accuracy.
It's possible but painful to calculate this via the API (by parsing every individual review record and not just review_statistics). Regardless, I think user scripts should probably emphasize question accuracy (or at least make it very clear if/when they report something different).
Question accuracy
The percentage of review questions answered correctly, whether reading or meaning, and independent of how many times each question was asked.
A running calculation for this value during a review session is displayed in the upper right of the review screen (next to the thumbs-up icon). That is, the percentage of subjects that progressed to a higher SRS stage during a review session.
It's also available for each subject via the API in the percentage_correct attribute in the review_statistics structure.
Reading accuracy
The percentage of times the reading for an individual subject was answered correctly.
This is calculable for any given subject via the API (the reading_correct attribute in the review_statistics structure divided by reading_correct + reading_incorrect, multiplied by 100).
Meaning accuracy
The percentage of times the meaning for an individual subject was answered correctly.
This is calculable for any given subject via the API (the meaning_correct attribute in the review_statistics structure divided by meaning_correct + meaning_incorrect, multiplied by 100).
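For illustration, here is a minimal sketch of the last three calculations, assuming stat holds a single review_statistics resource as returned by the WaniKani API v2 (the variable name and the wrapping function are invented for this example):
// Per-subject accuracies from one review_statistics resource; field names follow the API.
function subjectAccuracies(stat) {
    const d = stat.data;
    const readingAccuracy = 100 * d.reading_correct / (d.reading_correct + d.reading_incorrect);
    const meaningAccuracy = 100 * d.meaning_correct / (d.meaning_correct + d.meaning_incorrect);
    return {readingAccuracy, meaningAccuracy, questionAccuracy: d.percentage_correct};
}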
One more thought re: removing the summary pages:
I wonder if it will take new users longer to discover and understand the "wrap up" button without the summary pages.
It makes sense that completing the last available review dumps you back to the dashboard, but I wonder if new users will click the "home" icon without realizing they are abandoning up to ten partially answered subjects. Seems like an "Are you sure?" modal might be warranted.
Iâd expect a number of users to do a few reviews, navigate back to the dashboard, then do a few more reviews and wonder why they are seeing some of the same items again. People have restarted incomplete sessions forever as a brute-force way to recover from typos or whatever, but I wonder if people will unintentionally improve their stats.
FWIW, it feels to me like the "home" icon on the review screen should throw up a modal with a message like "You still have n incompletely answered items. Are you sure?" and buttons for "return to dashboard" and "wrap up incomplete items".
Gonna list out some of my personal complaints so at least they can be read instead of just annoying me.
When closing a panel in the item info section while reviewing, it's a bit disorienting when it violently snaps you up because the space just got much smaller. Maybe it would be worth giving that section some minimum height, or (though I know this is easier said than done) animating the closing?
This also looks somewhat weird, and it obstructs a ton of stuff when hovering over it:
I like the idea a lot, but maybe it could shift the content to the left, so that if you want to mouse over and look at the individual parts, you can?
Also, regarding the item info section: maybe the primary meaning/reading could be made a bit more visible? Right now, looking at this, there are things that jump out to me first that are arguably less important. I'd assume that most of the time when someone opens this panel, what they are looking for is the meaning/reading they messed up.
Yes, I'd also really appreciate this info being more prominent. Just bold weight and larger font would help a ton from a UX standpoint.
Still playing with it, but I think I am going to need a "refresh" method for when the preset is changed and I need to re-run active manipulations. I could of course remove and re-apply the same manipulation, but I think having a refresh makes sense. Just mapping "refresh" to applyManipulations() (with no arguments) seems to work well.
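A sketch of what that mapping could look like; applyManipulations() is only mentioned in this thread, not shown, so both the call and its placement are assumptions:
// Hypothetical: expose refresh() as a thin wrapper that re-runs all registered manipulations.
wkQueue.refresh = () => wkQueue.applyManipulations();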
First, I'd like to thank you for being so helpful in this thread. It's making this transition a lot easier for us.
Second, I am wondering if there would be any way for us to alter the item answer stats? I erase answer stats for items for a particular setting of my "back to back" feature so that you have to redo previously correctly answered questions. Totally understand if there is not; I'll just sunset that feature.
edit:
Third, is there any way for us to change the preferred voice actor (in a session)? Being able to randomize the voice actor is really valuable.
Just want to say I agree with what @Kumirei says above about the voice actor thing. I find having it randomized for each audio clip essential (to avoid getting too used to hearing words pronounced always exactly the same), and if it can't be an option built into WaniKani then it would be great if you could at least make sure scripts can still do it!
I have added wkQueue.refresh() to version 0.3 and fixed the mapping of addTotalChange. Furthermore, finished subjects are now removed from originalQueue; otherwise they would get reintroduced when manipulating the queue again halfway through the reviews. Oops.
I'm wondering how I should approach lesson reordering. I probably have to use Open Framework to determine the list of subjects ready for learning, because this list is not embedded in the HTML sent by the server.
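As an illustration of one possible approach (my assumption, not necessarily how the script will do it): with Open Framework loaded, the subjects currently available for lessons could be derived from assignment data, e.g. unlocked but not yet started:
// Hedged sketch: find subjects ready for lessons via wkof assignment data.
// The criterion (unlocked_at set, started_at still null) is an assumption.
wkof.include('ItemData');
wkof.ready('ItemData').then(async () => {
    const items = await wkof.ItemData.get_items({wk_items: {options: {assignments: true}}});
    const lessonReady = items.filter(i =>
        i.assignments && i.assignments.unlocked_at !== null && i.assignments.started_at === null);
    console.log(`${lessonReady.length} subjects ready for lessons`, lessonReady);
});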
I hope this can stay in the new reviews. I love the true back-to-back option!
Hey, is applyManipulation intended to ignore the present state of the queue? I was thinking it would just manipulate the current queue, but it looks like it passes in the original queue.
(context: trying to figure out how to implement back to back)
No, that was not my intention. I will fix it right away.
This is kind of stupid on my part, but I was splicing the queue passed to the callback and ended up modifying the original queue. Just letting you know in case you want to add a guard against that, but I'm not sure that's necessary.
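To illustrate the pitfall with plain JavaScript (nothing here is specific to Queue Manipulator): splice() mutates the array it is called on, so calling it on the queue passed to the callback also changes the script's own copy, whereas slice() or filter() return new arrays and leave it untouched.
function applyPreset(currentQueue) {
    // currentQueue.splice(0, 10);        // would mutate the shared array in place
    return currentQueue.slice(0, 10);     // non-mutating: the original queue stays intact
}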
Version 0.4 should fix the two reported issues.
EDIT: …or not. It seems that structuredClone is not doing what I expected it to do, and now everything is broken.