This script enhances the example sentences provided by WaniKani with the following features:
Recognizable Kanji Highlighting
Highlights the kanji you have already learned and should be able to recognize. This encourages you to read the context sentence just to see whether you still remember all the kanji you have learned in it.
Forgot the kanji? Just click on it and you will be taken to the kanji’s item page to review it again.
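To illustrate how such highlighting can work, here is a minimal sketch using the WaniKani Open Framework's ItemData module. This is not the script's actual code; the CSS selector and the SRS threshold are assumptions.

```javascript
// Minimal sketch, not the script's actual code: collect kanji the user has
// already started learning via the Open Framework, then wrap matching
// characters in each context sentence with a link to their item page.
wkof.include('ItemData');
wkof.ready('ItemData')
    .then(() => wkof.ItemData.get_items('assignments'))
    .then((items) => {
        const learned = new Map();
        for (const item of items) {
            // The SRS threshold here is an assumption; the script's own rule may differ.
            if (item.object === 'kanji' && item.assignments && item.assignments.srs_stage >= 1) {
                learned.set(item.data.characters, item.data.document_url);
            }
        }
        // The selector below is an assumption about WaniKani's page structure.
        document.querySelectorAll('.context-sentence-group p[lang="ja"]').forEach((p) => {
            p.innerHTML = [...p.textContent].map((ch) =>
                learned.has(ch)
                    ? `<a href="${learned.get(ch)}" class="recognized-kanji">${ch}</a>`
                    : ch
            ).join('');
        });
    });
```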
Sentence Audio
Adds an audio button that uses voice synthesis to read the Japanese sentence. By default, this uses the voice synthesizer provided by your web browser, but it can be changed in the settings to use the Google Translate voice synthesizer. Keep in mind that neither of the two options delivers perfect results.[3]
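For the browser option, the underlying mechanism is the Web Speech API. A minimal sketch (not the script's actual code) looks roughly like this; the Google Translate alternative works differently (it plays an audio stream) and is not shown:

```javascript
// Minimal sketch: read a Japanese sentence with the browser's Web Speech API.
function speakSentence(sentence) {
    const utterance = new SpeechSynthesisUtterance(sentence);
    utterance.lang = 'ja-JP';          // ask for a Japanese voice
    utterance.rate = 1.0;              // normal speaking rate
    window.speechSynthesis.cancel();   // stop anything still playing
    window.speechSynthesis.speak(utterance);
}

speakSentence('はやく中に入ろう。');
```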
Kanji Information
Hovering over a kanji reveals some information about it, such as its WaniKani level, readings, and meanings. Additionally, using the Open Framework JLPT, Joyo, and Frequency filters, Advanced Context Sentence can show you the JLPT level, Joyo level, and frequency of that kanji.
Note: this information is only available if the kanji is on WaniKani. If it is not, the link will take you to the kanji's page on Jisho.org, where you can find all the information you need.
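As a rough idea of how the hover text can be assembled, here is a minimal sketch built on the kanji subject fields exposed through the Open Framework's ItemData module. The JLPT/Joyo/frequency parts come from the separate filter scripts and are left out.

```javascript
// Minimal sketch, not the script's actual code: build hover text for a kanji
// from its WaniKani subject data (as provided by wkof.ItemData).
function kanjiTooltip(item) {
    const readings = item.data.readings
        .filter((r) => r.primary)
        .map((r) => r.reading)
        .join(', ');
    const meanings = item.data.meanings
        .filter((m) => m.primary)
        .map((m) => m.meaning)
        .join(', ');
    return `Level ${item.data.level} · ${readings} · ${meanings}`;
}
```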
Weblink to a Sentence Segmenter
Optionally adds a link to send the sentence to a webpage of your choice. By default, the sentence is sent to ichi.moe, which segments the sentence into phrases/words and shows each word's translation.
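Building such a link is essentially just URL construction. A minimal sketch follows; the ichi.moe URL pattern used here is an assumption rather than the script's confirmed behavior.

```javascript
// Minimal sketch: build a link that sends the sentence to an external
// segmenter. The URL pattern below is an assumption and may not match
// what the script actually uses.
function segmenterLink(sentence) {
    const url = 'https://ichi.moe/cl/qr/?q=' + encodeURIComponent(sentence);
    const a = document.createElement('a');
    a.href = url;
    a.target = '_blank';
    a.textContent = '🔗';
    return a;
}
```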
Without Open Framework installed, the recognizable kanji highlighting and the kanji information on hover will not work. Open Framework is also required to open the settings dialog. ↩︎
Without the Open Framework JLPT, Joyo, and Frequency filters installed, the JLPT, Joyo, and frequency information will be missing from the kanji info popup. ↩︎
Warning: Do not rely on the synthetic sentence audio too much. At the time of writing this, Google Translate reads 中 in the example sentence “はやく中に入ろう。” for the vocabulary word 中 with the wrong reading (ちゅう instead of なか), and my web browser’s voice misreads it as well. ↩︎
Cool new feature with the link to ichi.moe. I’ve used it before when reading books, but it makes sense to have it attached to these context sentences.
Allow selecting from the list of Japanese synthetic voices that the browser supports
Until now, when using the speech synthesis provided by the browser, the script would just tell the API that it wants a Japanese voice, without further specification. Now, the user can choose from a list of all Japanese voices available in their browser.
The available options may vary between browsers: on Windows 10, Edge offers me Ayumi, Haruka, Ichiro, and Sayaka as local voices (all with low-quality results) and Nanami as an online voice (very good quality). Chrome offers the same four local voices and Google 日本語 as an online voice (also of disappointing quality). Firefox only finds Haruka and does not add any online options.
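Enumerating those voices boils down to filtering `speechSynthesis.getVoices()` for Japanese entries. A minimal sketch (not necessarily how the script does it):

```javascript
// Minimal sketch: list the Japanese voices the browser exposes so the user
// can pick one. getVoices() may return an empty array until the
// 'voiceschanged' event has fired.
function getJapaneseVoices() {
    return window.speechSynthesis.getVoices()
        .filter((voice) => voice.lang.startsWith('ja'));
}

window.speechSynthesis.addEventListener('voiceschanged', () => {
    const voices = getJapaneseVoices();
    console.log(voices.map((v) => `${v.name} (${v.localService ? 'local' : 'online'})`));
});

// Later, when speaking, assign the chosen voice to the utterance:
// utterance.voice = chosenVoice;
```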
Microsoft Nanami is the only voice available to me that actually manages to read はやく中に入ろう。 correctly.
If SSML is used, this value will be overridden by prosody tags in the markup.
Looking into SSML: it stands for “Speech Synthesis Markup Language”, and if I understand correctly, it would allow using “furigana” to tell the speech synthesizer which reading to use. Sadly, SSML does not seem to be supported by web browsers. But I stumbled upon a webpage that showcased another interesting feature: highlighting the word that is currently being read. I thought this could be an interesting addition to “Advanced Context Sentence 2”.
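For the highlighting idea, the Web Speech API's `boundary` event is the relevant hook. A minimal sketch of the concept follows; whether the event reports useful positions for Japanese depends on the voice and browser, so treat this as an experiment rather than a guaranteed feature.

```javascript
// Minimal sketch: wrap the word currently being spoken in <mark> tags.
// Assumes the element contains plain text only.
function speakWithHighlight(sentenceElement) {
    const text = sentenceElement.textContent;
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.lang = 'ja-JP';
    utterance.addEventListener('boundary', (event) => {
        const start = event.charIndex;
        // charLength is not reported by every browser/voice; fall back to the end.
        const end = event.charLength ? start + event.charLength : text.length;
        sentenceElement.innerHTML =
            text.slice(0, start) +
            '<mark>' + text.slice(start, end) + '</mark>' +
            text.slice(end);
    });
    utterance.addEventListener('end', () => { sentenceElement.textContent = text; });
    window.speechSynthesis.speak(utterance);
}
```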
This feature is not available for the Google Translate voice.
Gotta look up SSML not only for furigana, but also for pitch accent. (Still, it requires setting up a TTS server and some subscription; I already have Azure anyway.)
In the W3C spec there is <xhtml:rt role="alphabet:x-JEITA">, but I’m not sure how well it works, I can’t find any documentation, and which TTS engines even support it? (Google and Amazon appear to only support IPA and X-SAMPA.)
Where would you get furigana data anyway, other than handmade? Even Furigana Inserter isn’t that accurate. It is of course possible to do it yourself, but I’m not sure about sharing it with the community.
I forgot to mention the Web Speech API. It might require some setup? On Windows that means adding a Japanese keyboard, but it’s much easier on a Mac. For Linux it depends, but you probably need espeak and some troubleshooting?
I did not say that I wanted to use SSML in “Advanced Context Sentence 2”; I was just looking into it out of general curiosity, to see if it could potentially be used in any userscript. That said, it might also have been possible to use it in “Advanced Context Sentence 2”: for example, with the context sentence “はやく中に入ろう。” for the vocab item 中, the only occurrence of 中 in the sentence would have to be read with the reading provided by WK. This could have been useful to prevent some speech synthesizers from reading it as ちゅう.
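Purely as an illustration of what that would look like: SSML’s `<sub>` element substitutes a spoken alias for the written text. Since browsers don’t support SSML, the snippet below just builds the markup as a string; it would only be useful with a cloud TTS service that accepts SSML.

```javascript
// Illustrative only: force the reading なか for 中 via SSML's <sub> element.
// Browsers' speech synthesis does not accept SSML, so this string could only
// be sent to an external TTS service that does.
const ssml = `
<speak>
  はやく<sub alias="なか">中</sub>に入ろう。
</speak>`;
```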
But anyway, as there is no browser support I didn’t look any further into it.
At least Google Chrome and Microsoft Edge (on Windows 10) seem to support Japanese online voices without any need for setup. I think Firefox does not, and only finds one local Japanese voice; but even if the web browser does not provide any Japanese voices, “Advanced Context Sentence” still offers the Google Translate voice as an alternative.