[Userscript] Media Context Sentences (formerly Anime Context Sentences 2)

:warning: This is a third-party script/app and is not created by the WaniKani team. By using this, you understand that it can stop working at any time or be discontinued indefinitely.

So, my changes to the script were getting to be rather significant, and the thread for the original author’s version was getting hard to follow because the author hasn’t updated it in quite some time. I had been updating a modified version in that thread, but it was getting lost in the mix. Therefore, I’ve decided to make a separate thread and upload my changed version to Greasy Fork. I also changed the name to reflect the fact that it can now show more than just anime sentences.

tl;dr for the “what this does” (if you haven’t checked out the original version):
Adds a section for vocabulary items that includes a list of context sentences from other sources (primarily anime) along with an audio snippet of the dialogue.


Available via Greasy Fork at WaniKani Media Context Sentences


The changes I made compared to the original script include:

  • Reimplementing a settings icon (:gear:) and speaker icon (:speaker:)[1]
    • Moved the speaker icon to after the title of the source to take up less space
    • Made the speaker icon change to :loud_sound: for the active playing selection
  • Stopping the currently playing audio when clicking the same selection again
    • Selecting a different entry also stops the previous audio first
  • Clicking on elements set to toggle hidden state “On Click” no longer triggers the audio to play
  • Coloring of the text of the vocabulary word in the example sentence
  • Adding at least 10 additional anime entries, and fixing a few of the existing entries’ names to match the results
  • New search categories, including “dramas”, “games”, “literature”, and “news”
    • All entries in all categories are enabled by default
  • Settings:
    • Redesigned the settings with separate pages for the different sections
    • General:
      • Real-time settings previews for all of the settings on this page
      • Option to resize the height of the container box, including the unit (default: “px”)
      • Option to set a limit for how many examples are displayed
      • Option to adjust audio playback volume (Default: 100%)
      • Options to configure retrying the API fetch when no results are found (a rough sketch of the idea is shown after this list)
      • Option to have it show up for kanji items as well
        • For example, to preview how it’s used in vocab
        • If no results show up, checking or unchecking the “Exact Search” setting and saving often provides the desired results
    • Sorting:
      • Optional secondary sorting
      • Additional sorting methods:
        • Title of source
          • That is, it ignores the categories and sorts all the titles alphabetically
        • Position of keyword in sentence
          • As a secondary sorting method, its effect is most noticeable when the primary sort isn’t based on sentence length
    • Filter:
      • “Exact Match” option
        • Filters the results even further and affects the highlighting
      • “Exact Search” option
        • Affects the results obtained from Immersion Kit by quoting the search term
      • New anime entries and additional categories as mentioned above
  • Updating the referenced version of WaniKani Item Info Injector
  • Moved the content to the bottom of the Context section instead of being a separate section
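
For anyone curious how the retry option works under the hood, here is a rough sketch of the idea. It’s only an illustration: fetchExamples here is a stand-in passed in for the actual Immersion Kit query, and the script’s real implementation differs in the details.

async function fetchWithRetry(fetchExamples, keyword, retryCount = 3, retryDelay = 2000) {
    // Ask once, then retry a limited number of times if the API returns nothing.
    let examples = await fetchExamples(keyword);
    for (let attempt = 0; attempt < retryCount && examples.length === 0; attempt++) {
        // The API can be a bit finicky, so wait before asking again.
        await new Promise(resolve => setTimeout(resolve, retryDelay));
        examples = await fetchExamples(keyword);
    }
    return examples;
}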

Version Changelog (diffs available via Greasy Fork history if desired)
  • 1.1.4: Initial version with changes to fix audio, add speaker icon, and add gear icon.
  • 1.2.0: Modified version with an updated anime list, more settings for filtering, and coloring of the keyword within the example text.
  • 1.2.1: Made a bit of a better system for coloring the keywords (previously it would fail when using furigana in various scenarios).
  • 1.2.2: Greatly improved the system for coloring the keywords (will now successfully color the keywords in sentences that have random spaces in the middle of a keyword, and it will now properly match longer vocab with mixed okurigana).
  • 1.2.3: Fixed a small oversight in the segment parser for the colorizer in 1.1.3.4.
  • 1.2.4: No longer needlessly queries the API when saving the settings results in no changes that would require a re-query (similarly, it won’t recreate the list if nothing has changed). Made a bunch of other code simplifications/optimizations.
  • 1.2.5: Added new search categories [dramas, games, literature, news] (all disabled by default) and added many entries for them (news and literature are shown with Japanese titles in the settings; hint: ctrl+a works to select all once a single entry has been selected). Added Mob Psycho 100 to the anime list (despite it being in the “Drama” category on the site for some reason). Changed the order of the selection boxes and resized a few of them. Fixed onclick for the speaker icon. Now allows changing the Example Limit to 0, which makes it return as many results as possible. Improved some array lookups by using hashmaps instead. Fixed and improved the checks for whether the filter lists have been updated or not and fixed some issues that were present when recreating the list; changing the filters now updates the list extremely fast. Most things involving the settings are being handled in a more graceful manner now.
  • 1.2.6: Added a setting to configure the max height of the container box. Moved “Only Yesterday” to Ghibli films as was mentioned earlier in the thread. Changed the scriptId referenced by WaniKani Open Framework to be unique to this version of the script (will consequently “reset” settings to default).
  • 1.2.7: Added regex.lastIndex = 0 before each regex.test(string) usage to prevent inconsistent behavior when using the global regex flag in JavaScript; this fixes an issue that would occasionally prevent a keyword from being identified for coloring (see the short illustration after this changelog).
  • 1.2.8: Updated to the latest version of WaniKani Item Info Injector. Updated the matching logic for the colorizer to work in most or all circumstances (in particular, it should now also work when Exact Search is unchecked and a match is found with the kana/kanji equivalent of the word). Added more description to the hover text for Exact Search. Now does no sorting on the original query, so results will be cached unsorted, and adjusting the settings will therefore give properly sorted results every time. No longer needlessly recreates the list when switching the Example Limit/Show Japanese/Show Furigana/Show English settings, instead relying on CSS modifications to handle it. Added a ton more code optimizations.
  • 1.2.9: Fixed the CSS selector for when Example Limit is set to 0. Reverted to the previous method of audio element configuration, which was better able to handle clicking on different elements while one is already playing. Removed some unused functions.
  • 1.2.10: CSS selector now works as described when changing the Example Limit to 0. Fixed some async/await usage incongruencies.
  • 1.2.11: Added a range slider setting to adjust the audio playback volume and converted the playback speed number box into a slider as well. Also made it so that changing these values does not require re-rendering the HTML. (The minimum accepted playback speed is now 10%.)
  • 1.2.12: Updated the number for playback speed to match the description.
  • 1.2.13: Can now change the following settings and preview their effect in real-time (clicking cancel or otherwise closing the window without saving will revert the changes): Box Height, Example Limit, Playback Speed, Playback Volume, Show Japanese, Show Furigana, Show English; if the input is a text box, you must click outside of the input box in order to see the changes. Fixed a few other issues related to modifying those settings.
  • 1.3.0: Moved the sorting order to its own separate section in the settings. Made it clearer that all of the settings in General can be previewed in real-time. Inlined some simple multi-line code blocks, removed some useless code statements and commented-out code. Organized the functions a little bit.
  • 1.3.1: Readded the descriptive text that shows when no results were found or when they were all filtered out by the user’s settings.
  • 2.0.0: Redesigned the settings into separate pages. Added a secondary sorting method. Introduced a delayed retry for fetching the results when none were found (since the API can be a bit finicky sometimes). Organized a large amount of the code.
  • 2.0.1: Moved Mob Psycho 100 back to the drama list (just found out that it’s referring to the live action). Sorry for how this will affect the settings for any anime you’ve disabled that comes alphabetically after that. If I redesign the settings, I’ll make it so additions and removals can be less destructive in nature. Also, fixed some of the userscript documentation.
  • 2.0.2: Added configurable settings for the retry count and retry delay. Updated the functions used in onSettingsClosed to be more consistent. Grouped sections of the settings so their functions are easier to understand, and added some more descriptions to them.
  • 2.0.3: Fixed a CSS styling oversight that could sometimes result in the span element being wider than the parent div and therefore causing a horizontal scrollbar to appear and take up space.
  • 2.0.4: Fixed the function call for creating the onclick listener for the English text element when first creating the elements.
  • 2.0.5: Moved the prior CSS fix to the proper selector.
  • 2.0.6: Added flex:auto to the img selector so that even when no image is found, it will pad the area to create a consistent-looking table.
  • 2.0.7: Better fix for the padding when some images are nonexistent. Also sorted some of the CSS rules and added additional class selectors when applicable.
  • 2.1.0: Fixed some possible issues with exampleLimit not being immediately recognized as a number. Generalized the CSS selectors using a variable name defined at the start. Added some fixes for edge-case scenarios that could still cause the horizontal slider to appear. Made a few other small optimizations.
  • 2.1.1: Made a number of small optimizations (mostly with the CSS selectors as well as string parsing).
  • 2.2.0: Moved all of the content to be a subsection at the bottom of the Context section instead of being its own separate section. Added an option to sort by position in sentence (but it doesn’t function properly as of this update).
  • 2.2.1: Implemented the position-of-keyword sort method that was accidentally prematurely introduced in the previous update.
  • 2.3.1: Completely rewrote the Furigana class to be more catered to the needs of the script, resulting in a better ability to highlight the matching keyword(s) within the sentences. Greatly improved the clarity of the way the CSS class lists are being updated. Added an element that includes the text without furigana, which condenses unnecessary padding and spaces within and around the text when showFurigana is set to ‘never’. Fixed some small oversights from the original script. Modified some usages to be more consistent with the esversion style.
  • 2.3.2: Small fix for setting the default playback rate to properly be 100%.
  • 2.4.1: Complete rewrite of the way the filters are done, which, along with many of the other optimizations added, ultimately makes the results return faster in certain scenarios. Updated the names of some of the filter items to match the updated names from Immersion Kit; note that this means that most users will need to double check that their filter selections match their desired preferences, since for these lists: modification = new item. Fixed the behavior of the fetchRetryCount setting to work more as expected (i.e., modifying this now allows one to essentially force a retry by increasing the count allowed). Added some code comments to the settings variables.
  • 2.4.2: Added a fix for migrating old settings to the new design.
  • 3.0.0: Changed name to “WaniKani Media Context Sentences”. Updated Item Info Injector dependency version. Added an Exact Match filter setting (I think I added it in this version). Added an ability to have it show up on kanji items as well (e.g., to preview how it’s used in vocab). If no results show up, checking or unchecking the “Exact Search” filter setting and saving often provides the desired results. Changed the box height setting to allow for custom unit measurements (defaults to “px” if none provided). Probably a lot more changes, but I’ve had them locally for so long I can’t remember what I did.
  • 3.0.1: Updated namespace as well, since changing the name also seems to make it so reinstallation is required.
  • 3.0.2: Fixed invalid (old-version) names for two literature entries. (Kanji items only) Made it automatically fall back to using Exact Search when no results are found. Removed the Apiv2 lookup (still loading the module to ensure wkof.user is defined). Made the Immersion Kit text into a link. Sorted various functions. Added turbo:load handling for adding the settings menu to the wkof settings list.
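
For anyone curious about the 1.2.7 fix above, here is a small illustration of the global-flag gotcha (the keyword regex is just an example, not the script’s actual pattern):

const keywordRegex = /食べる/g;
const sentence = '毎日食べる';

console.log(keywordRegex.test(sentence)); // true; lastIndex has now advanced past the match
console.log(keywordRegex.test(sentence)); // false; this search resumes from lastIndex instead of from 0

// Resetting lastIndex before each test() makes the result consistent again:
keywordRegex.lastIndex = 0;
console.log(keywordRegex.test(sentence)); // true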

  1. I think the forum renders these icons differently… ↩︎


Thanks for your work on this! Do you have any recommendations for one or two of the drama/news sources? I don’t think I’ve heard of any of them before…


Honestly, I’m not familiar with them either, but I noticed them when looking into the results on Immersion Kit’s website, so I figured it would be remiss of me not to include them. There are vastly fewer non-anime entries in the database, so you’ll probably be fine enabling as many as you want. If you end up with too many you don’t want, it’s pretty simple to disable them as you go.
I will say, though, that the few examples I’ve seen from Legal High have been pretty intense, which could easily be a good thing or a bad thing, depending on your goal.


Thank you VERY much for this! I find the context sentences to be super useful :grin:


Awesome script; it has been very helpful for recognition outside of WK. Is there any chance of adding reveal functionality for reviews? It would be great to see a random sentence either during the reviews or after answering, similar to these scripts:

I’ll have to give it some thought, but I’m not convinced yet that it should be a feature here.

Are you basically just asking for an expanded version of the Simple Show Context Sentence script that pulls from a larger pool of context sentences?
The “after answering” part is already accounted for by the nature of the script unless I’m misunderstanding somehow.

Thank you for your consideration.

There are two similar use cases that I have in mind:

  1. Showing the sentence both before and after answering:
  • This could serve as a hint beforehand and then reinforce the context of the word after answering. It would essentially be an expanded version of the Simple Show Context Sentence script but pulling from a more extensive pool of sentences.

  • The main concern with the Simple Show Context Sentence script is that the limited pool of sentences could lead to memorizing the sentence and negatively impact learning. Having a larger pool of sentences from various sources like your script provides would likely significantly mitigate that issue whilst still providing contextual benefits.

  2. Showing the sentence only after answering:
  • This approach would add context to the words without providing hints before answering. This would align closely with the current functionality of your script but would streamline the process by displaying a random sentence immediately after the review, rather than requiring navigation through Item Info > Context > Anime Sentences.

  • This way, users can immediately see a relevant example sentence without adding too much time to their review process. Since there would be no hint before answering, it should avoid potential downsides for users.

The primary benefit of integrating this feature is that it enhances the utility of context sentences during reviews without significantly increasing review time. Given the large volume of daily reviews, manually checking context sentences can be time-consuming. Automating this would provide immediate contextual reinforcement, crucial for long-term retention, without adding much to the overall review duration.

If you consider implementing either use case as an optional feature (defaulting to off), it could be very beneficial for users who want more context in their review sessions but struggle with time constraints. I’m not sure how much work it would take to implement this, and it would likely extend the script’s scope, but I believe it would add substantial value with relatively simple ongoing maintenance.

Thanks again for your work and for your consideration!

Just to give you an answer for now, I won’t be available to work on this for at least the present week since I’ll be out of town. However, I’ll give it proper consideration once I’m back.


Okay, sorry for the extra-long delay.

This is a strong point, and I find myself in agreement with it.
Nevertheless, I have to play devil’s advocate here regarding the quality of the results from Immersion Kit, and therefore the viability of using this script to gather random context sentences:
A lot of the time, the sentences found are correct matches/usages for the word (whether using the “Exact Search” setting or not[1]).

However, a good chunk of the time, they are not quite up to par. At best (of the worst), you’ll get a sentence that’s just the word itself and no other context[2]. At worst, you’ll get a sentence where it’s actually a different word because of the context[3].

I feel like I’m starting to see an XY problem here. Correct me if I’m wrong, but is the main takeaway that you’d like the ability to benefit from a generally nonrepeating context sentence without significantly hampering the workflow of a review session? It’s rather unfortunate that Simple Show Context Sentence always returns the first context sentence, regardless of how many other sentences WaniKani may have available.
Adjusting that script to pick one at random wouldn’t be too hard, though I’m guessing you would still be unsatisfied pulling only from the list WaniKani provides?

In any case, here's all you would have to change to accomplish that.

Find the line with:

sentence = item.data.context_sentences[0]?.ja || '';

and replace it with:

const context_sentences = item.data.context_sentences || [];
sentence = context_sentences[Math.floor(Math.random()*context_sentences.length)]?.ja || '';

(Or you could just use item.data.context_sentences for both context_sentences references and keep it as a single line, but I prefer to store the variable when working with external data, in case the implementation changes and the context_sentences property becomes a computed property, for example. Something something overoptimization.)


I’m not completely dismissing this as a possibility, though I would prefer to avoid duplicating the functionality of another working script, particularly while the results that might be pulled from this can’t be verified for accuracy.
I think of these results as more of a quantity over quality thing, where if you know what you’re looking for, it can be very helpful.
That’s not to throw shade on the developers of Immersion Kit. They’re doing everyone a big service by hosting it and providing an API to access it, not to mention the work that went into building it.
Anyway, I think I’m getting a bit tired and am starting to ramble, so I’ll stop here for now. Let me know if you have any other thoughts on it or if I missed something important.


  1. In many cases, unchecking Exact Search yields better results in terms of proper usage, but unfortunately, it also provides/allows kana-only renditions of the word, which could end up showing you the reading before you’ve had a chance to answer it yourself. ↩︎

  2. Sure, there are somewhat easy ways one might create a workaround for that. ↩︎

  3. For example: for the word 嫌悪(けんお), you’ll get sentences with 機嫌悪(きげんわる)い (primarily when Exact Search is enabled). ↩︎

Just wanted to mention for anyone using this that I noticed some names got changed in the results, so after updating to 2.4.0+ you may need to check whether your filter lists are set as desired, since I had to modify a number of them (this should now be handled properly by the migration logic added in 2.4.2).

Essentially, other than removing the : in Re:Zero, it looks like a bunch of the ones in the literature category had their author names in the titles, and those have been removed. So, I’ve updated the lists accordingly.

Because of that, this ended up being a bit of a rushed update, and I also pushed a bunch of other modifications I had been working on. If anyone notices anything else break because of that, please let me know!

Edit: I’ve noticed a big oversight with migrating from the previous settings layout, so actually, it’s probably better to wait to update until I’ve made a fix for that (will be version 2.4.2).

Finally released an update for this that I’d been holding onto locally for a long time. I also updated the name and namespace, which means you’ll almost certainly have to install the script again. That is, it probably won’t let you update over top of the old one.

Steps (order doesn’t matter):

  • Remove (or disable) the old Anime Context Sentences 2 script
  • Install WaniKani Media Context Sentences from Greasy Fork


It let me just update it like any other update with Tampermonkey :slight_smile: Thanks for working on the script as always!


No sentences are showing for any words for me rn. I think something’s wrong with the script

I have no problems on my end.

  1. What browser are you using?
  2. Does it show any results when you search for the same thing on https://www.immersionkit.com?

Hello hello. Sorry for the super-mega-extra-long delay as well. And apologies in advance for the ramble ahead.

As it turns out, I decided to move away from WK so I could do stuff like this in Anki myself. I had downloaded quite a few sentence banks (Tatoeba/Immersion Kit/Jalup/etc.) and was adding some filters and such to them (prioritising sentences that aren’t too long or too short, only including them if they have the correct reading, etc.) and made some progress. At some point whilst working on it, though, I found that the general vocab-cards vs. sentence-cards (on the front) debate has a pretty strong argument for vocab cards for the purpose of speed. Moreover, I realised that the results others have been getting from focusing on immersion and quick reviews were sufficient for me to not need to experiment with stuff too much (otherwise, I’d keep working on this, as I think SRS is extremely efficient).

This was my experience as well. What worked for me was ensuring that the kanji readings correspond, as well as some other things.

100%, that was my aim, and the small number of examples from WK was the reason it was a problem.

I agree completely regarding the quantity over quality. And, for sure, if you’re looking to confirm whether a word is used how you expect or similar, the tool is great for that. However, I think a similar tool could play a larger role in learning words. In fact, I don’t really see the low quality as too much of a problem. The way I see it, you want to do the reviews as quickly as possible. When you find a card that you don’t know, it can help to use a sentence as a hint. But having the same sentence is just way too large a hint and makes you tie your knowledge to that specific context. So, having completely random sentences alleviates that problem. Even if it just takes you seeing the particle に to realise how a certain verb is being used, I think that’s fine, because you get the same hint in reality. Moreover, it helps you develop a better understanding than just the definition provided. If, sometimes, the example is bad, it’s not too big of a problem, as over time you’ll get the correct understanding.

What stumped me was handling different readings. Previously, I had tried to do a core deck, but the problem I always had was with words where the same kanji has multiple readings, like 開く, which can be read あく or ひらく. I think, in most cases, the context helps you discern which reading to use, and that reading-context hint is really just the perfect level of hint for understanding the differences between the meanings of these words as well. Moreover, often, we understand words because of collocations, and if the collocations are always there, there’s no reason not to rely on them during reviews. If you could sample random sentences from the true population of all sentences that have ever existed, I can’t see any argument against their use. Anyway, I will definitely revisit this project when I have more time. Please let me know if you’d be interested in working on something like this.


Hi! Sorry, but have you experienced any lagginess in the browser when using this script? I also observed that the media context sentences appear once both reading and meaning have been answered correctly (not sure if this is the expected behavior).

I will try tomorrow on a faster laptop, as this current laptop is quite old. I’m also using Firefox, and here are the scripts I currently have enabled for my own use:

That is the expected behavior. Though, an item is cached after it is first seen, so any subsequent visits shouldn’t require much computational time.

That being said, if there’s one thing I could potentially optimize, it would be the example limit. The way it currently works, it doesn’t affect how many results are retrieved from Immersion Kit, only how many are displayed in the box. Unfortunately, changing how it works would have significant repercussions; by that I mean, changing the example limit would no longer be a quick operation and would instead require completely regenerating the list.
The above paragraph doesn’t mean I won’t consider this, but just that it needs time to cook.
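
To illustrate what I mean about the caching and the example limit (just a sketch; the actual code is named and structured differently, and queryImmersionKit is a stand-in for the real fetch):

// Results are kept in an in-memory map keyed by the word, so the API is only
// queried the first time an item is seen during a session.
const exampleCache = new Map();

async function getExamples(word, queryImmersionKit) {
    if (!exampleCache.has(word)) {
        exampleCache.set(word, await queryImmersionKit(word));
    }
    // The Example Limit setting only trims what is displayed, not what is fetched,
    // so changing it later doesn't require touching this cache.
    return exampleCache.get(word);
}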

I have a very powerful PC, so there’s no easy way for me to experiment with performance.

Though…that screenshot shows that you have this script disabled, so unless that was just you doing some testing, you might be looking in the wrong area.


Ah, so with a lower-spec laptop like mine, I should just bear with it the first time, and if I review the same items correctly in the future, it shouldn’t lag too much anymore?

Ah, yes. That’s just me showing the scripts I currently have (and I did disable it for that specific topic); I thought there might be some scripts conflicting with yours, hence the lag. Which I think has been explained by:

Thank you for your assistance! I really like this script. But when I’m pressed for time and need to do reviews fast, I just turn it off temporarily :sweat:

Ah, well, I might have somewhat misled you with my statement. It’s only cached in RAM (i.e., an array), so as soon as the page is unloaded or refreshed, it’ll need to refetch them. Conversely, though, with the new Turbo loading system, as long as you don’t refresh the page, they should theoretically all remain fetched.
Also, if it’s the creation of the elements on the page that is causing your lag, then I’ll need to rethink my strategy. Hence why I’m not immediately sure.
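
For reference, hooking into Turbo navigation looks roughly like this (only a sketch; the script’s actual turbo:load handler does more than this):

// Turbo swaps in new page content without a full reload, so module-level state
// (like the in-memory example cache) survives navigation; a hard refresh clears it.
document.addEventListener('turbo:load', () => {
    // Re-run any per-page setup here (e.g., re-adding the settings menu entry).
    // Illustrative only; the real initialization in the script does more.
    console.log('Turbo finished loading the new page content.');
});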


Some scripts only work if the page is refreshed, though; or, more aptly stated, some scripts may need the page to be refreshed in order to work. With Turbo’s behavior, I could technically keep two tabs open (one for the dashboard, another for reviews) and I wouldn’t need to refresh the reviews page... unless a script demands a page refresh orz

Well, that combined with this 9-year-old laptop. If I have your scripts on, there’s a browser warning that says “This page is slowing Firefox” or something. Or it could be some of the other scripts I have rn. I’ll need to test this on my better (but still not great-spec) laptop later.