Updates to Lessons, Reviews, and Extra Study

There are people who systematically skip vocab lessons and end up with hundreds or possibly thousands of pending lessons. Do you do anything to stop the URL from becoming too long in these cases?

I am not sure I understand the question entirely. The queue I auto-generate is based on the user's lesson order preference and their lesson batch size preference (the default size is 5). There is nothing to stop the queue from being manipulated (for now). Please just use it responsibly.

We won't register lessons that you haven't unlocked, so if you happen to include a subject ID that isn't ready for learning, the subsequent quiz registration for that subject is effectively a no-op.

1 Like

Do you think you could add a method to get the current queue? I am inclined to use it, but since I work primarily against WKOF data in Omega, I think it would be easiest if I could get the IDs, do my processing against WKOF, then call wkQueue.applyManipulation() to replace the queue.

edit:

Just noticed that you support WKOF already, and I probably want to use addTotalChange? Although, is it intended that addTotalChange calls applyManipulation rather than the private addTotalChange method?

edit2:

Also is there any reason why you didn’t publish it as a library script?

1 Like

If you want, I can add a method to get the current queue, but I am not sure what the advantage of that would be. The callback function (in the following example, applyPreset()) gets passed the current queue, and you can access the wkof items if you add openFramework: true in the options object:

wkQueue.addTotalChange(applyPreset, {openFramework: true});

async function applyPreset(currentQueue) {
    // reorder and/or filter currentQueue here;
    // the wkof item of each queue element is accessible like this: currentQueue[0].item
    const result = currentQueue.slice().reverse(); // e.g. reverse the queue
    return result;
}

If you need more endpoints loaded into the wkof items, you can pass a configuration object with the openFrameworkGetItemsConfig property – this will be forwarded to wkof’s get_items():

wkQueue.addTotalChange(applyPreset, {openFramework: true, openFrameworkGetItemsConfig: {wk_items: {options: {assignments: true}}}});

function applyPreset(currentQueue) {
    return currentQueue.sort((i0, i1) => i0.item.assignments.srs_stage - i1.item.assignments.srs_stage);
}

Sorry that there is no documentation yet – I hope the intended use becomes a bit clearer.


addTotalChange, addFilter and addReorder are not really handled differently; I just divided the possible use cases into three groups which, in my opinion, should ideally be applied in this order: first addTotalChange, which may do anything it wants to the queue (maybe even add new items); then addFilter, which may remove items or change their order; and finally addReorder, which should keep all items and only change their order. So if there is a userscript that adds items in addTotalChange(), a second userscript that only orders items alphabetically (or whatever) in addReorder() is guaranteed to also affect the items added by the first script.
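For illustration, a minimal sketch of that guarantee with two independent userscripts (the subject ID 9 is just a placeholder):

// Script A: prepends an extra subject (by ID) to the queue
wkQueue.addTotalChange(q => [9, ...q]);

// Script B: only changes the order; because addReorder runs after addTotalChange,
// it also sees and reorders the subject that Script A added
wkQueue.addReorder(q => q.slice().reverse());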

1 Like

In the interface you have addTotalChange mapped to applyManipulation; shouldn’t it be mapped to the internal addTotalChange?

1 Like

Yes, that’s right. It was already 3am yesterday :sweat_smile:

2 Likes

I did the same thing with Item Info Injector – it can be @required in any userscript, but it can optionally also be installed manually, to guarantee that the latest script version is always used, even if the requiring script points at an older version.
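For readers unfamiliar with the mechanism, a rough sketch of what @requiring such a library looks like in a userscript header (the URL here is a placeholder, not the actual install location):

// ==UserScript==
// @name     My WaniKani Userscript
// @match    https://www.wanikani.com/*
// @require  https://example.com/item-info-injector.user.js
// ==/UserScript==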


I actually did not test yesterday if adding items really works, but it seems to go without problems if I’m in an extra study session:

wkQueue.addTotalChange(q => [1, ...q]);

This adds the item with id 1 to the front of the queue. During reviews, WK gets confused when trying to submit that I correctly answered the ground radical (which I have already burned) and shows the “connection lost” screen.


One more thing I should mention: Queue Manipulator makes sure to call the callback function whenever needed, which might happen more than once. That also means that the intended use is to register the callback(s) when the script is loaded, without needing to check yourself which page is currently displayed.

Example: Go to a page that already uses Turbo, register a manipulation, for example

wkQueue.addFilter(q => q.filter(i => i.subject.subject_category === `Kanji`), {subject: true});

and then click on the “Reviews” button in the page header: Queue Manipulator applies the manipulation immediately on review start.

1 Like

Hmm. Perhaps not all of the information (I hope)?

It occurs to me that, as it stands, the somewhat confusing “item accuracy” statistic (vs. question accuracy) also disappears from the preview UI.

I’m usually against removing info that was previously available, but in this case, I think it’s a good thing. Two different percentages were a little confusing. Either does a fine job of answering “how difficult did I find this session/subject?”, but the values cannot be compared to each other. “Which accuracy stat do you mean?” is an annoying and unnecessary question.

Since the SRS buckets on the dashboard and “Your Progress” sections on individual items already show you how subjects are progressing, I only see a downside to resurrecting the “item accuracy” stat.

Please let me know if I’m wrong about any of this.

My definitions:

Item accuracy

The percentage of subjects answered correctly on the first attempt for both meaning and reading components during a review session.

This is/was displayed on the summary pages. It tends to be a significantly smaller value than question accuracy.

It’s possible but painful to calculate this via the API (by parsing every individual review record and not just review_statistics). Regardless, I think user scripts should probably emphasize question accuracy (or at least make it very clear if/when they report something different).
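To make that concrete, a hedged sketch of the per-session calculation from individual review records (field names as in the WaniKani API v2 /reviews resource; itemAccuracy is a made-up helper name):

// reviews: the review records of one session, as returned by the /reviews endpoint
function itemAccuracy(reviews) {
    // a subject counts as correct only if it had no wrong answers for either question
    const firstTryCorrect = reviews.filter(r =>
        r.data.incorrect_meaning_answers === 0 && r.data.incorrect_reading_answers === 0);
    return 100 * firstTryCorrect.length / reviews.length;
}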

Question accuracy

The percentage of review questions answered correctly, whether reading or meaning, and independent of how many times each question was asked.

A running calculation for this value during a review session is displayed in the upper right of the review screen (next to the thumbs-up icon). That is, the percentage of subjects that progressed to a higher SRS stage during a review session.

It’s also available for each subject via the API in the percentage_correct attribute in the review_statistics structure.

Reading accuracy

The percentage of times the reading for an individual subject was answered correctly.

This is calculable for any given subject via the API (the reading_correct attribute in the review_statistics structure divided by reading_correct + reading_incorrect, multiplied by 100).

Meaning accuracy

The percentage of times the meaning for an individual subject was answered correctly.

This is calculable for any given subject via the API (the meaning_correct attribute in the review_statistics structure divided by meaning_correct + meaning_incorrect, multiplied by 100).
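To make the arithmetic concrete, a small sketch of how the per-subject values above could be derived from a review_statistics record (subjectAccuracies is a made-up helper name):

// stat: the data object of one review_statistics record from the WaniKani API
function subjectAccuracies(stat) {
    const pct = (correct, incorrect) => 100 * correct / (correct + incorrect);
    return {
        reading: pct(stat.reading_correct, stat.reading_incorrect),
        meaning: pct(stat.meaning_correct, stat.meaning_incorrect),
        question: stat.percentage_correct, // question accuracy as reported by the API
    };
}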

One more thought re: removing the summary pages:

I wonder if it will take new users longer to discover and understand the “wrap up” button without the summary pages.

It makes sense that completing the last available review dumps you back to the dashboard, but I wonder if new users will click the “home” icon without realizing they are abandoning up to ten partially answered subjects. Seems like an “Are you sure?” modal might be warranted.

I’d expect a number of users to do a few reviews, navigate back to the dashboard, then do a few more reviews and wonder why they are seeing some of the same items again. People have restarted incomplete sessions forever as a brute-force way to recover from typos or whatever, but I wonder if people will unintentionally improve their stats.

FWIW, it feels to me like the “home” icon on the review screen should throw up a modal with a message like: “You still have n incompletely answered items – are you sure?” with buttons for “return to dashboard” and “wrap up incomplete items”.

4 Likes

Gonna list out some of my personal complaints so at least they can be read instead of just annoying me.

When closing a panel in the item info section while reviewing, it’s a bit disorienting when it violently snaps you up because the space just got much smaller. Maybe it would be worth giving that section some minimum height, or (though I know this is easier said than done) animating the closing?

This also looks somewhat weird, and it obstructs a ton of stuff when hovering over it:
[screenshot]
I like the idea a lot, but maybe it could shift the content to the left, so that if you want to mouse over and look at the individual parts, you can?

Also, regarding the item info section: maybe the primary meaning/reading could be made a bit more visible? Right now, looking at this, there are things that jump out at me first that are arguably less important than that. I’d assume that most of the time when someone opens this panel, what they are looking for is the meaning/reading they messed up.

6 Likes

Yes, I’d also really appreciate this info being more prominent. Just bold weight and larger font would help a ton from a UX standpoint.

6 Likes

Still playing with it, but I think I am going to need a “refresh” method for when the preset is changed and I need to re-run active manipulations. I could of course remove and re-apply the same manipulation, but I think having a refresh makes sense. Just mapping “refresh” to applyManipulations() (with no arguments) seems to work well.
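Roughly what I have in mind, as a sketch (assuming applyManipulations() is the method that re-runs every registered manipulation):

// hypothetical: a refresh that simply re-runs all registered manipulations
wkQueue.refresh = function () {
    this.applyManipulations();
};

// later, when the user changes the preset in my script:
// wkQueue.refresh();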

1 Like

First I’d like to thank you for being so helpful in this thread. It’s making this transition a lot easier for us.

Second, I am wondering if there would be any way for us to alter the item answer stats? I erase answer stats for items under a particular setting of my “back to back” feature so that you have to redo previously correctly answered questions. Totally understand if there is not; I’ll just sunset that feature.

edit:

Third, is there any way for us to change the preferred voice actor (in a session)? Being able to randomize the voice actor is really valuable.

8 Likes

Just want to say I agree with what @Kumirei says above about the voice actor thing - I find having it randomized for each audio essential (to avoid getting too used to hearing words always pronounced in exactly the same way), and if it can’t be an option built into WaniKani, then it would be great if you could at least make sure scripts can still do it!

1 Like

I have added wkQueue.refresh() to version 0.3 and fixed the mapping of addTotalChange. Furthermore, finished subjects are now removed from originalQueue – otherwise they would get reintroduced when manipulating the queue again halfway through the reviews. Oops :sweat_smile:

I’m wondering how I should approach lesson reordering. I probably have to use Open Framework to determine the list of subjects ready for learning, because this list is not embedded in the HTML that is sent.

2 Likes

I hope this can stay in the new reviews :pray: I love the true back to back option!

3 Likes

Hey, is applyManipulation intended to ignore the present state of the queue? I was thinking it would just manipulate the current queue, but it looks like it passes in the original queue.

(context: trying to figure out how to implement back to back)

2 Likes

No, that was not my intention. I will fix it right away.

2 Likes

This is kind of stupid on my part, but I was splicing the queue passed to the callback and ended up modifying the original queue :sweat_smile: Just letting you know in case you want to add a guard against that, but I’m not sure that’s necessary.
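For anyone else hitting this, a minimal sketch of the pitfall (the manipulation itself is made up; only the mutation matters):

// splice() mutates the array in place, so this also changes the queue
// object that Queue Manipulator passed to the callback
wkQueue.addTotalChange(q => {
    const first = q.splice(0, 1);   // oops: mutates the shared queue
    return [...q, ...first];
});

// safer: work on a copy
wkQueue.addTotalChange(q => {
    const copy = q.slice();
    const first = copy.splice(0, 1);
    return [...copy, ...first];
});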

1 Like

Version 0.4 should fix the two reported issues.

EDIT: 
or not. It seems that structuredClone is not doing what I expected it to do, and now everything is broken :sweat_smile:

1 Like