[Userscript] Advanced Context Sentence 2

This is a third-party script and is not created by the WaniKani team. By using this, you understand that it can stop working at any time or be discontinued indefinitely.

The original Advanced Context Sentence userscript by @abdullahalt is not maintained anymore, so I decided to take over maintenance.

Install Advanced Context Sentence 2
General script installation guide
Optional[1]: Open Framework installation
Optional[2]: Open Framework JLPT, Joyo, and Frequency filters
Do not forget to uninstall the old Advanced Context Sentence script if you have it installed.

(The following description is partially copied from the original script’s thread)

This script enhances the example sentences provided by WaniKani with the following features:

  • Recognizable Kanji Highlighting
    Highlights the kanji you have already learned and should be able to recognize. This encourages you to read the context sentence just to try to recall all the kanji you have learned in it.
    Forgot a kanji? Just click on it and you will be taken to the kanji’s item page to review it again.

  • Sentence Audio
    Adds an audio button that uses voice synthesis to read the Japanese sentence. By default, this uses the voice synthesizer provided by your web browser, but it can be changed in the settings to use the Google Translate voice synthesizer. Keep in mind that neither option will deliver perfect results.[3]

  • Kanji Information
    Hovering over a kanji reveals information such as its WaniKani level, readings, and meanings. Additionally, with the Open Framework JLPT, Joyo, and Frequency filters installed, Advanced Context Sentence can also show you the JLPT level, Joyo level, and frequency of that kanji.
    Note: you will only get this information if the kanji is on WaniKani; if not, the link will take you to the kanji’s page on Jisho.org, where you can find all the information you need.

  • Weblink to a Sentence Segmenter
    Optionally adds a link :arrow_upper_right: to send the sentence to a webpage of your choice. By default, the sentence is sent to ichi.moe which will segment the sentence into phrases/words and show each word’s translation.
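As a sketch of how such a weblink can be built in a userscript, the sentence just needs to be URL-encoded into the target page’s query string. Note that the `/cl/qr/?q=` path used below is an assumption about ichi.moe’s URL format, not taken from the script itself:

```javascript
// Hypothetical helper (the "/cl/qr/?q=" query format is an assumption):
// build a link that sends a sentence to ichi.moe for segmentation.
function ichiMoeUrl(sentence) {
  return "https://ichi.moe/cl/qr/?q=" + encodeURIComponent(sentence);
}

console.log(ichiMoeUrl("はやく中に入ろう。"));
```

`encodeURIComponent` takes care of percent-encoding the Japanese characters so the sentence survives the round trip through the URL.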

Advanced Context Sentence 2

  1. Without Open Framework installed, the recognizable kanji highlighting and the kanji information on hover will not work. Open Framework is also required for the settings dialog. ↩︎

  2. Without Open Framework JLPT, Joyo, and Frequency filters installed, the JLPT, Joyo, and frequency information will be missing from the kanji info popup. ↩︎

  3. Warning: Do not rely on the synthetic sentence audio too much. At the time of writing this, Google Translate reads the example sentence “はやく中(なか)に入(はい)ろう。” for the vocabulary word 中(なか) as “はやく中(ちゅう)に入(はい)ろう。”, and my web browser even reads it as “はやく中(ちゅう)に入(いり)ろう。”. ↩︎



I updated the entry in the scripts list to point to this thread.

  1. This is a cool feature. Didn’t know we could do that. ↩︎


Cool new feature with the link to ichi.moe. :+1:t3: I’ve used it before when I’m reading books but it makes sense to have it attached to these context sentences.


Thank you very much! The old script stopped working for me a while ago; with your update it is working again. Nice touch with ichi.moe!

1 Like

Very useful, thank you for updating the script!

Do you think it would be possible to add Voicevox TTS to the script somehow? Here’s the link to its GitHub repository with the engine/core.

They don’t offer an online API, it’s just a downloadable tool, so I don’t think it is possible to use it in a userscript.

1 Like

Version 2.1:

Allow selecting from the list of Japanese synthetic voices that the browser supports

Until now, when using the speech synthesis provided by the browser, the script would just tell the API that it wants a Japanese voice, without further specification. Now, the user can choose from a list of all Japanese voices available in their browser.

The available options may vary between different browsers: On Windows 10, Edge offers me Ayumi, Haruka, Ichiro, and Sayaka as local voices (all with low-quality results) and Nanami as an online voice (very good quality). Chrome offers the same four local voices plus Google 日本語 as an online voice (also of disappointing quality). Firefox only finds Haruka and does not add any online options.
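For reference, a minimal sketch of how a script can enumerate and pick one of the browser’s Japanese voices via the Web Speech API. The filtering part is kept pure (with a mock voice list standing in for `speechSynthesis.getVoices()`) so it can run anywhere; the voice names are just examples, not guaranteed to exist on any system:

```javascript
// Pure part: filter a voice list down to Japanese voices.
function japaneseVoices(voices) {
  return voices.filter((v) => v.lang.startsWith("ja"));
}

// Mock of what speechSynthesis.getVoices() might return (example names only):
const mockVoices = [
  { name: "Microsoft Haruka", lang: "ja-JP", localService: true },
  { name: "Microsoft Nanami Online", lang: "ja-JP", localService: false },
  { name: "Microsoft Zira", lang: "en-US", localService: true },
];
console.log(japaneseVoices(mockVoices).map((v) => v.name));
// → ["Microsoft Haruka", "Microsoft Nanami Online"]

// Browser-side part (sketch): speak a sentence with the selected voice.
function speakWith(voice, sentence) {
  const u = new SpeechSynthesisUtterance(sentence);
  u.lang = "ja-JP";
  u.voice = voice; // one of speechSynthesis.getVoices()
  speechSynthesis.speak(u);
}
```

In a real page, `getVoices()` may return an empty list until the `voiceschanged` event fires, which is one reason the available options differ between browsers.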

Microsoft Nanami is the only voice available to me that actually manages to read はやく中(なか)に入(はい)ろう。 correctly.

Please let me know if something doesn’t work.
Link to previous script version for downgrading in case version 2.1 doesn’t work for you


Really good app, but the volume absolutely destroys my ears because it plays at full blast compared to everything else on my PC.

Is there a way to have a slider for it?

1 Like

I have now added a volume slider to the settings menu :slight_smile:

Version 2.3:

Added a volume setting

A slider located in the settings menu. To use the settings menu, you need Open Framework installed.
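The Web Speech API’s `SpeechSynthesisUtterance.volume` expects a value between 0 and 1, so a 0–100 settings slider needs to be scaled and clamped. A minimal sketch of the idea (not the script’s actual code):

```javascript
// Map a 0–100 slider value to the 0–1 range expected by utterance.volume.
function sliderToVolume(sliderValue) {
  return Math.min(1, Math.max(0, sliderValue / 100));
}

// Browser-side usage (sketch):
//   const u = new SpeechSynthesisUtterance(sentence);
//   u.volume = sliderToVolume(settings.volume);
console.log(sliderToVolume(50));  // → 0.5
console.log(sliderToVolume(150)); // → 1
```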

New functionality: Highlight currently read sentence section

While searching for volume controls for the synthetic speech, I stumbled upon the sentence

If SSML is used, this value will be overridden by prosody tags in the markup.

Looking into SSML, it stands for “Speech Synthesis Markup Language”, and if I understand correctly, it would allow using “furigana” to tell the speech synthesizer which reading to use. Sadly, SSML does not seem to be supported by web browsers. But I stumbled upon a webpage that showcased another interesting feature: highlighting the word that is currently being read. I thought this could be an interesting addition to “Advanced Context Sentence 2”.
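One way to implement such highlighting with the Web Speech API is the utterance’s `boundary` event, which reports the character position the synthesizer has reached (`charIndex`, and in some browsers also `charLength`). A hedged sketch, with the string-splitting part kept pure so it is testable outside a browser:

```javascript
// Pure helper: given a sentence and a boundary event's charIndex/charLength,
// return the text before, inside, and after the span currently being spoken.
function splitAtBoundary(sentence, charIndex, charLength) {
  const len = charLength || 1; // charLength is not reported by every browser
  return {
    before: sentence.slice(0, charIndex),
    current: sentence.slice(charIndex, charIndex + len),
    after: sentence.slice(charIndex + len),
  };
}

// Browser-side wiring (sketch): re-render the sentence on each boundary event.
function speakWithHighlight(sentence, render) {
  const u = new SpeechSynthesisUtterance(sentence);
  u.lang = "ja-JP";
  u.onboundary = (e) =>
    render(splitAtBoundary(sentence, e.charIndex, e.charLength));
  speechSynthesis.speak(u);
}
```

The `render` callback would wrap `current` in a highlighted element; how finely the sentence is split depends entirely on the boundaries the synthesizer chooses to report.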

This feature is not available for the Google Translate voice.

Please let me know if something doesn’t work.
Link to previous script version for downgrading in case version 2.3 doesn’t work for you

1 Like

Gotta look up SSML, not only for furigana but also for pitch accent. (Still, it requires setting up a TTS server and a subscription, though I already have Azure anyway.)

It is possible and probably not too hard in Azure, but it requires knowing more than just the kana reading or furigana.

| Character | SAPI | IPA |
| --- | --- | --- |
| 合成 | ゴ’ウセ | goˈwɯseji |
| 所有者 | ショュ’ウ?ャ | ɕjojɯˈwɯɕja |
| 最適化 | サィテキカ+ | sajitecikaˌ |

In w3c, there is <xhtml:rt role="alphabet:x-JEITA">, but I’m not sure how well it works, nor can I find documentation, and which TTS engines support it at all? (Google and Amazon appear to only have IPA and X-SAMPA.)

Where would you get the furigana data, anyway, other than handmade? Even Furigana Inserter isn’t that accurate. It is of course possible to do by hand, but I’m not sure about sharing it with the community.

Forgot to mention the Web Speech API. It might require some setup? On Windows, adding a Japanese keyboard; it is much easier on Mac. For Linux it depends, but you probably need espeak and some troubleshooting.

I did not say that I wanted to use SSML in “Advanced Context Sentence 2” – I was just looking into it out of general curiosity, to see if it could potentially be used in any userscript. That said, it might also have been possible to use it in “Advanced Context Sentence 2”: For example, with the context sentence “はやく中(なか)に入(はい)ろう。” for the vocab item 中(なか), the only occurrence of 中 in the sentence would have to be read with the reading provided by WK. This could have been useful to prevent some speech synthesizers from reading it as ちゅう.
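For reference, SSML’s `<sub>` element substitutes alias text for the contained text, which in principle could force the kana reading (a sketch only; as noted, the browser `speechSynthesis` API does not accept SSML):

```xml
<speak version="1.1" xml:lang="ja-JP">
  はやく<sub alias="なか">中</sub>に<sub alias="はい">入</sub>ろう。
</speak>
```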

But anyway, as there is no browser support I didn’t look any further into it.

At least Google Chrome and Microsoft Edge (on Windows 10) seem to support Japanese online voices without any need for setup. I think Firefox does not, and only finds one local Japanese voice; but even if the web browser does not provide any Japanese voices, “Advanced Context Sentence” still offers the Google Translate voice as an alternative.

1 Like


Seems like this userscript broke for me recently, because every time I try to open my lessons, this appears (I already isolated it as the main culprit)

1 Like

It seems like the problem was that the script was setting the referrer policy to “no-referrer” for all network requests. I think the original author of the script did this because with the default referrer policy, the Google Translate voice doesn’t work. In version 2.9 I have changed the policy to “same-origin” and starting lessons seems to work again.
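For anyone curious, the difference can be illustrated with fetch’s per-request `referrerPolicy` option (a hypothetical sketch, not the script’s actual code): a page-wide “no-referrer” policy affects every request the page makes, while scoping the policy per request leaves WaniKani’s own requests untouched.

```javascript
// Build fetch options with a "same-origin" referrer policy instead of
// changing the page-wide policy (hypothetical helper, not the script's code).
function fetchOptions(extra) {
  return Object.assign({ referrerPolicy: "same-origin" }, extra);
}

// Browser-side usage (sketch):
//   fetch(audioUrl, fetchOptions({ method: "GET" }));
console.log(fetchOptions({ method: "GET" }));
```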


I haven’t used context sentences for 30 levels because they were impossible to read. I now have enough reading ability to read 1 in 4 sentences, so I came to the community to see whether there was a furigana add-on. But I found this script. Thank you SO MUCH! Now maybe I can actually get something out of the context sentences!


@Sinyaven Firstly, thanks for the great script!

I’m currently looking for a script to play audio for the common word combinations above the context sentences. So far, I have not found any. Are you planning to implement it in this script in the future?

1 Like

Maybe I will add this as an optional feature. I will take a look when I have time.

1 Like

I’ve been using this script for quite a while and it has been a huge help for me, but unfortunately it seems like the new WaniKani update may have broken some of its features. Since the update, unlearned kanji have been coloured the same as learned kanji, and the tooltip that appeared when hovering over a kanji no longer shows up. All of the context sentence settings are still set to default, with tooltips enabled.

I disabled all of my userscripts except for this one and the Open Framework to see if any scripts were conflicting, as well as using both Chrome and Firefox, but these issues still persist. I’m curious if anyone else is having similar issues, or if something is wrong on my end. : (

1 Like

I already noticed that the tooltips are not working anymore, but I decided to prioritize fixing some of my other scripts first. I had not yet noticed the problem with the coloring of unlearned kanji – when I get around to it, I will fix this as well.

Thanks for the bug report!


Would it be possible to have it work with the Anime Context Sentences script?

That would be so wonderful. I know this might be quite an endeavor, so I’m willing to send some beers or tea your way.

Let me know what you think.

What exactly do you mean with “have it work with the Anime Context Sentences script”? Have you run into a conflict between the scripts so that one of them doesn’t work correctly when both are enabled at the same time, or do you mean that you want the kanji in the sentences from that script also highlighted according to your knowledge?