This is a third-party script and is not created by the WaniKani team. By using this, you understand that it can stop working at any time or be discontinued indefinitely.
The original Advanced Context Sentence userscript by @abdullahalt is not maintained anymore, so I decided to take over maintenance.
This script enhances the example sentences provided by WaniKani with the following features:
Recognizable Kanji Highlighting
Highlights the kanji you have already learned and should be able to recognize. This will encourage you to read the context sentence just to try and remember all the kanji you have learned in it.
Forgot the kanji? Just click on it and you will be taken to the kanji’s item page to review it again.
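Under the hood this is nothing magical. Here is a simplified sketch (not the script's actual code; the CSS class name is made up for illustration) of how learned kanji can be wrapped in links:

```javascript
// Simplified sketch, not the script's actual code. Wraps every kanji the user has
// already learned in a link to its WaniKani item page; "acs-recognized" is a
// made-up class name used only for illustration.
function highlightKanji(sentence, learnedKanji) {
    return Array.from(sentence).map(ch =>
        learnedKanji.has(ch)
            ? `<a class="acs-recognized" href="https://www.wanikani.com/kanji/${encodeURIComponent(ch)}">${ch}</a>`
            : ch
    ).join('');
}

// Example: with 中 and 入 already learned, both become clickable, highlighted links.
highlightKanji('はやく中に入ろう。', new Set(['中', '入']));
```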
Sentence Audio
Adds an audio button that uses voice synthesis to read the Japanese sentence. By default, this uses the voice synthesizer provided by your web browser, but it can be changed in the settings to use the Google Translate voice synthesizer. Keep in mind that neither of the two options delivers perfect results.[3]
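In case you are wondering how the browser option works: it goes through the standard Web Speech API, roughly like this minimal sketch (not the script's actual code):

```javascript
// Minimal sketch of the browser option: ask the standard Web Speech API to read
// the sentence with a Japanese voice. Not the script's actual code.
function speakSentence(sentence) {
    const utterance = new SpeechSynthesisUtterance(sentence);
    utterance.lang = 'ja-JP';          // request a Japanese voice
    window.speechSynthesis.cancel();   // stop any sentence that is still playing
    window.speechSynthesis.speak(utterance);
}

speakSentence('はやく中に入ろう。');
```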
Kanji Information
Hovering over a kanji reveals some information about it, such as its WaniKani level, readings, and meanings. Additionally, with the Open Framework JLPT, Joyo, and Frequency filters installed, Advanced Context Sentence can also show you the JLPT level, Joyo level, and frequency of that kanji.
Note: you will only get this information if the kanji is on WaniKani; if it is not, the link will take you to the kanji’s page on Jisho.org, where you can find all the information you need.
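For those interested, the lookup roughly works like the sketch below. The exact wkof call shape is quoted from memory and may differ from what the script really does, and the Jisho fallback is just a normal kanji search link.

```javascript
// Rough sketch only; the wkof call shape is quoted from memory and may differ
// from the script's real code. Fetch all WaniKani kanji once, then link either
// to the WaniKani item page or to a Jisho.org kanji search as a fallback.
wkof.include('ItemData');
wkof.ready('ItemData')
    .then(() => wkof.ItemData.get_items({ wk_items: { filters: { item_type: 'kan' } } }))
    .then(items => {
        // index WK kanji by character for quick lookups while building tooltips
        const wkKanji = new Map(items.map(item => [item.data.characters, item]));

        const linkFor = kanji => wkKanji.has(kanji)
            ? `https://www.wanikani.com/kanji/${encodeURIComponent(kanji)}`        // on WaniKani
            : `https://jisho.org/search/${encodeURIComponent(kanji + ' #kanji')}`; // not on WaniKani

        console.log(linkFor('中'));
    });
```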
Weblink to a Sentence Segmenter
Optionally adds a link to send the sentence to a web page of your choice. By default, the sentence is sent to ichi.moe, which segments the sentence into phrases/words and shows each word’s translation.
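The link itself is nothing more than the sentence appended to the target page's query URL. For ichi.moe that looks roughly like this (the query format is an assumption based on how ichi.moe links look today):

```javascript
// Sketch: build the segmenter link for a context sentence. The ichi.moe query
// format below is an assumption based on current ichi.moe links and may change.
function ichiMoeLink(sentence) {
    return 'https://ichi.moe/cl/qr/?q=' + encodeURIComponent(sentence);
}

ichiMoeLink('はやく中に入ろう。');
```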
Without Open Framework installed, the recognizable kanji highlighting and the kanji information on hover will not work. It is also required to open the settings dialog. ↩︎
Without Open Framework JLPT, Joyo, and Frequency filters installed, the JLPT, Joyo, and frequency information will be missing from the kanji info popup. ↩︎
Warning: Do not rely on the synthetic sentence audio too much. At the time of writing, Google Translate reads 中 in the example sentence “はやく中に入ろう。” for the vocabulary word 中 with a wrong reading instead of なか, and my web browser’s voice gets it wrong as well. ↩︎
Cool new feature with the link to ichi.moe. I’ve used it before when reading books, but it makes sense to have it attached to these context sentences.
Allow selecting from the list of Japanese synthetic voices that the browser supports
Until now, when using the speech synthesis provided by the browser, the script would simply tell the API that it wants a Japanese voice, without further specification. Now, the user can choose from a list of all the voices available in their browser.
The available options may vary between browsers: on Windows 10, Edge offers me Ayumi, Haruka, Ichiro, and Sayaka as local voices (all with low-quality results) and Nanami as an online voice (very good quality). Chrome offers the same four local voices and Google 日本語 as an online voice (also of disappointing quality). Firefox only finds Haruka and does not add any online options.
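For reference, the list is simply whatever the browser reports through the Web Speech API; a small sketch (not the script's exact code):

```javascript
// Small sketch: list the Japanese voices the browser reports. getVoices() may be
// empty until the 'voiceschanged' event has fired, so the listing waits for it.
function japaneseVoices() {
    return window.speechSynthesis.getVoices()
        .filter(voice => voice.lang.replace('_', '-').startsWith('ja'));
}

speechSynthesis.addEventListener('voiceschanged', () => {
    for (const voice of japaneseVoices()) {
        console.log(voice.name, voice.lang, voice.localService ? 'local' : 'online');
    }
});
```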
Microsoft Nanami is the only voice available to me that actually manages to read はやく中に入ろう。 correctly.
If SSML is used, this value will be overridden by prosody tags in the markup.
Looking into SSML: it stands for “Speech Synthesis Markup Language”, and if I understand correctly, it would allow using “furigana” to tell the speech synthesizer which reading to use. Sadly, SSML does not seem to be supported by web browsers. But I stumbled upon a web page that showcased another interesting feature: highlighting the word that is currently being read. I thought this could be an interesting addition to “Advanced Context Sentence 2”.
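As far as I can tell, the highlighting part does not even need SSML: the Web Speech API fires boundary events while a browser voice is speaking, so something like the following sketch should work (not tested in every browser):

```javascript
// Sketch: highlight the chunk currently being spoken using the Web Speech API's
// 'boundary' events. charLength is not available in every browser, hence the fallback.
function speakWithHighlight(sentence, element) {
    const utterance = new SpeechSynthesisUtterance(sentence);
    utterance.lang = 'ja-JP';
    utterance.onboundary = event => {
        const end = event.charLength ? event.charIndex + event.charLength : sentence.length;
        element.innerHTML =
            sentence.slice(0, event.charIndex) +
            '<mark>' + sentence.slice(event.charIndex, end) + '</mark>' +
            sentence.slice(end);
    };
    utterance.onend = () => { element.textContent = sentence; };
    speechSynthesis.speak(utterance);
}
```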
This feature is not available for the Google Translate voice.
Gotta look up SSML not only for furigana, but also for pitch accent. (Still, it requires setting up a TTS server and some subscription, though I already have Azure anyway.)
In the W3C spec there is <xhtml:rt role="alphabet:x-JEITA">, but I’m not sure how well it works, nor can I find documentation for it, and which TTS engines support it at all? (Google and Amazon appear to only support IPA and X-SAMPA.)
Where would you get furigana data anyway, other than making it by hand? Even Furigana Inserter isn’t that accurate. It is of course possible to do it yourself, but I’m not sure about sharing it with the community.
I forgot to mention the Web Speech API. It might require some setup: adding a Japanese keyboard on Windows, though it’s much easier on Mac. On Linux it depends, but you probably need espeak and some troubleshooting.
I did not say that I wanted to use SSML in “Advanced Context Sentence 2”; I was just looking into it out of general curiosity, to see whether it could potentially be used in any userscript. That said, it might also have been possible to use it in “Advanced Context Sentence 2”: for example, with the context sentence “はやく中に入ろう。” for the vocab item 中, the only occurrence of 中 in the sentence would have to be read with the reading provided by WK. This could have been useful to prevent some speech synthesizers from reading it as ちゅう.
But anyway, as there is no browser support, I didn’t look into it any further.
At least Google Chrome and Microsoft Edge (on Windows 10) seem to support Japanese online voices without any need for setup. I think Firefox does not, and only finds one local Japanese voice; but even if the web browser does not provide any Japanese voices, “Advanced Context Sentence” still provides the Google Translate voice as an alternative.
It seems like the problem was that the script was setting the referrer policy to “no-referrer” for all network requests. I think the original author of the script did this because the Google Translate voice doesn’t work with the default referrer policy. In version 2.9, I have changed the policy to “same-origin”, and starting lessons seems to work again.
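For anyone curious what that change amounts to: a page-wide referrer policy can be set with a meta tag, roughly like below. This is only a sketch of the idea, not necessarily how the script really applies it.

```javascript
// Sketch of the idea only, not necessarily how the script applies it: a page-wide
// referrer policy can be set via a meta tag. 'no-referrer' broke starting lessons,
// while 'same-origin' still hides the referrer from third parties but keeps it
// for WaniKani's own requests.
const meta = document.createElement('meta');
meta.name = 'referrer';
meta.content = 'same-origin';
document.head.appendChild(meta);
```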
I haven’t used context sentences for 30 levels because they were impossible to read. I now have enough reading ability to read 1 in 4 sentences, so I came to the community to see whether there was a furigana add-on. But I found this script instead. Thank you SO MUCH! Now maybe I can actually get something out of the context sentences!
I’m currently looking for a script that plays audio for the common word combinations above the context sentences. So far, I have not found any. Are you planning to implement this in this script in the future?
I’ve been using this script for quite a while and it has been a huge help for me, but unfortunately it seems like the new WaniKani update may have broken some of its features. Since the update, unlearned kanji have been coloured the same as learned kanji, and the tooltip that appeared while hovering over a kanji no longer shows up. All of the context sentence settings are still at their defaults, with tooltips enabled.
I disabled all of my userscripts except for this one and the Open Framework to see if any scripts were conflicting, and tried both Chrome and Firefox, but these issues still persist. I’m curious whether anyone else is having similar issues, or if something is wrong on my end. :(
I had already noticed that the tooltips are not working anymore, but I decided to prioritize fixing some of my other scripts first. I had not yet noticed the problem with the coloring of unlearned kanji; when I get around to it, I will fix this as well.
What exactly do you mean by “have it work with the Anime Context Sentences script”? Have you run into a conflict between the scripts, so that one of them doesn’t work correctly when both are enabled at the same time, or do you mean that you want the kanji in the sentences from that script to also be highlighted according to your knowledge?