[Userscript]: Anime Context Sentences

You probably still have the outdated version of Advanced Context Sentence installed on your work computer.

Thank you! That’s probably it!

I noticed a kanji being displayed with the Chinese (Han-unified) glyph variant. Could you wrap the sentences in <span lang="ja"></span>? That worked for another script with the same issue. Or is that some misconfiguration on my end somewhere?

Anyway, thanks for making this - one of the coolest additions to Wanikani!

Seems fine with my config.

Which browser are you using?

psdcon, I cannot thank you enough for taking the time to make this. It has really helped me remember vocab words better, and see how they’re used in actual conversations. 本当にありがとうございました!

Love using this script when learning new Kanji. And all of the Ghibli movies are currently available in Japanese on HBO so this is really fun. Thank you!

I’ve been loving this script! However, I do have one issue.

WK’s CONTEXT / Patterns of Use tab seems to break the functionality of Anime Sentences during lessons. Specifically, Anime Sentences just doesn’t appear. I’ve seen this a few times now, most recently on WaniKani / Vocabulary / 急.

The vocab page itself comes up just fine, so I’m guessing it’s related to how WK uses tabs for lesson info, now that the Context tab is in the position the Example Sentences tab used to occupy.

It appears that ImmersionKit, despite being Japanese-specific, does not prioritize Japanese fonts, so it uses Chinese glyphs instead (on my Android).

The solution is roughly this, although you can use <html lang="ja"> as well.
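As a minimal sketch of that fix (the helper name is illustrative, not part of any script’s real API):

```javascript
// Hedged sketch: wrap sentence text in a lang="ja" span so the browser
// selects Japanese glyph variants for Han-unified codepoints instead of
// Chinese ones. wrapJa is a hypothetical helper, not the script's API.
function wrapJa(sentence) {
  return `<span lang="ja">${sentence}</span>`;
}

console.log(wrapJa("直す")); // <span lang="ja">直す</span>
```

Setting `lang="ja"` on the document root (`<html lang="ja">`) has the same effect page-wide.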

This should be fixed now in version 1.3 of “WaniKani Item Info Injector” which “Anime Context Sentences” uses to inject the anime sentences section. You can either wait until @psdcon updates “Anime Context Sentences” to require version 1.3, or you can install “WaniKani Item Info Injector” directly to immediately use the newest version.


Thanks for the support!

Yeah, but I can’t really see that meme screenshot.

Real Keikaku


This is golden! Thank you so much for this script, stumbling upon it literally made my day! I’m so happy! Not only do we get to have more relatable example sentences now, but now I can just go to Immersion kit and download an anki card with the example sentence from there! This is just perfect!

The only way to make it even better would be to add links from the shown example sentences to Immersion Kit, so that one can download the Anki cards right away without having to search for them first.

But in any case, thank you so so much for this script!

I love this script so much and have been using it without problems for a long time. Today it suddenly stopped loading the sentences and just says “Loading…”. I’ve tried reinstalling, but that doesn’t change anything. It seems to have started after I denied Tampermonkey access to my cookies, etc.? Any support with this would be much appreciated!

Oops, I messed up the sorting statement in the search after some optimizations today.

Should be working fine now.

Thank you so much!! This script makes learning vocab so much more interesting!

There’s also Youglish, though I’m not sure whether it has an API that could be used in a userscript. Also, it’s not anime or movies but real speech (though perhaps monologue).

Also, unlike ImmersionKit:

  • Sound files can’t be extracted for use in Anki, for example.
  • Rewinding is easy, and the audio is continuous rather than chopped into segments.

I found this from this Discord, btw.

Yeah, I’m aware of Youglish.

The problem is just the lack of good subtitled Japanese content on YouTube. Very often you end up with personal vlogs and V-tuber videos that could be mildly interesting but lack good context (interesting visuals / characters) to reinforce memory.

Visual novels and drama share the same problem, but to a lesser extent. Anime just has this combination of dramatic scenes, poppy colors, and exaggerated intonation that helps you remember the word or the phrase.

Speaking of updating the userscript… I’ve added a ton more anime (and drama) to the API, but since this userscript filters by whitelist, all the new content is filtered out. Maybe @psdcon can update it when they’re back.
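A rough sketch of the whitelist behavior described above (titles and field names are made up for illustration, not real API data):

```javascript
// Hedged sketch: results whose show title is not in the whitelist are
// dropped, so newly added shows never surface until the list is updated.
// The titles and object shape here are placeholders, not real API data.
const whitelist = new Set(["Show A", "Show B"]);

const results = [
  { title: "Show A", sentence: "例文その一" },
  { title: "Newly Added Show", sentence: "例文その二" },
];

const visible = results.filter(r => whitelist.has(r.title));
console.log(visible.length); // 1 — the newly added show is filtered out
```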


Not to be nitpicky, but this is about the original website.

  • Sound files aren’t really continuous; some middle segments, presumably judged to be silent, are missing.

I know it’s impossible to audit every segment to check whether the vocabulary exists, or is in the correct form, but there are some instances where Youglish does better.

This is from Homophone Dictionary, but Jisho only has 会う (listing 遭う as an alternative form).

(I have also just noticed the thing about <title> or og:title meta tags, but well, Jisho failed on that too.)

Another important case: not all Japanese vocabulary items have kanji, or even kana, at all.

Again, somehow, Youglish got this right.

Of course, there are cases where both fail. I can’t think of a good example right now, but I guess the vocab gets broken up first by MeCab.

Somehow, Youglish wins again.

Also, not all vocabulary items have Jisho entries, phrases in particular. Some manual labor might be needed to fix this.

I’m also thinking of using community manual labor, or perhaps just my own, to fix this, even if only partially.

Just a quick reply: exact searches are what you want for such cases.


It’s noted in the search section. I know most people won’t read it, so I’ve been thinking of adding an exact-search toggle, which would make it more obvious.
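A hedged sketch of such a toggle, done client-side; the function and field names are illustrative, not ImmersionKit’s real API:

```javascript
// Hedged sketch: with the toggle on, only sentences containing the query
// verbatim are kept, so inflected or homophonous matches (泣いた for 泣く)
// are filtered out. Names here are placeholders, not a real API.
function filterResults(results, query, exactOnly) {
  if (!exactOnly) return results;
  return results.filter(r => r.sentence.includes(query));
}

const hits = [
  { sentence: "彼は泣いた" }, // inflected form, no verbatim 泣く
  { sentence: "泣くなよ" },   // contains 泣く verbatim
];

console.log(filterResults(hits, "泣く", true).length);  // 1
console.log(filterResults(hits, "泣く", false).length); // 2
```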

I would refrain from labeling searches as “winning” or “losing”; it isn’t a case of not returning results, but of what is shown and whether that is what the user/learner wants.

From a learner’s perspective, it’s fine to return inflections; for example, 泣く could return 泣いた or 泣かない. But what about なく? What should that return? I don’t recall anyone saying there is a problem with jisho.org when it provides 20 entries for なく, or other forms of あう when searching for 遭う, but some find that an issue with ImmersionKit.

On a side note, Sudachi is used, not MeCab.

I agree there are some cases where a different parser or a different search algorithm would make sense, and in fact I have added quite a number of hard mappings on top of the Sudachi parser. I guess if you were to sit someone down for a full-time summer job going through the parsing of the 10k most common vocab items, that would help the site.


^ Just raising an example to point out problems with different parsing.

Well, I already open-sourced the early data on GitHub, and Jo Mako has all the data on his public spreadsheet, so you’re welcome to patch it.


It’s more a matter of kanji choice, actually. Since this is a kanji-learning site, that can matter.

Also, I didn’t really test the userscript, so I can’t tell whether it would fail, but I don’t think 「」 is part of the script.

I would consider making a PR. It’s in /resources/*/*/data.json’s word_base_list key, perhaps. I also notice that I don’t really need to use the API directly: just looking at the sound key and adding your static base URL is enough.
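A minimal sketch of that idea; the base URL and the entry shape are assumptions, not documented values:

```javascript
// Hedged sketch: build a playable audio URL from a data.json entry's
// "sound" key plus a static base URL, skipping the API entirely.
// BASE_URL is a placeholder; the real host would have to be filled in.
const BASE_URL = "https://example.com/media/";

function soundUrl(entry) {
  return BASE_URL + entry.sound;
}

console.log(soundUrl({ sound: "ep01_0042.mp3" }));
// https://example.com/media/ep01_0042.mp3
```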

That being said, whether or not the PR is accepted, I can still build my own search engine.