There is [Userscript]: Anime Context Sentences, but the matching of vocab to sentences is automated rather than human-curated, so the results aren't verified; for example, nobody checks whether a match falls outside the actual speech or inside a name.
Also, I am not sure if it is possible to extract the audio, and the licensing is unclear for use elsewhere, for example in Anki or other Japanese learning websites / apps.
Not to mention that the video clips are sometimes improperly chopped or out of sync.
As for the number of context sentences, I'd guess there are a little fewer than three times the number of vocabulary items here.
Thinking about making the resource myself: it might be better to find a database of native audio sentences (with or without video) and try to caption it on my own. That is doable, much like gradually adding vocabulary to Anki.
As for the process, the transcription could be done with speech recognition (or a real transcript) in advance, with proofreading happening later during individual vocabulary study.
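To sketch what the matching step might look like, here is a minimal example of pairing a vocab item with transcript sentences and flagging every match as unverified, so the proofreading happens during review. All names here are hypothetical, not from any existing tool:

```python
# Sketch: pair a vocabulary item with transcript sentences that contain it.
# Matches are flagged as unverified so a human can proofread them during study.

def find_context_sentences(vocab, sentences):
    """Return sentences containing the vocab, flagged for later proofreading."""
    return [
        {"sentence": s, "vocab": vocab, "verified": False}
        for s in sentences
        if vocab in s
    ]

transcript = [
    "猫が好きです。",
    "彼の名前は猫田です。",  # a false positive: 猫 appears inside a name
    "今日はいい天気ですね。",
]

matches = find_context_sentences("猫", transcript)
# Both the genuine use and the name-match are returned; the "verified" flag
# is what lets the human pass catch the name-match later.
```

This is exactly the failure mode of the automated userscript approach: a naive substring match can't tell a real usage from a name, which is why the manual proofreading pass is part of the plan.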
Actually, bunpro has a lot of voiced sentences that are also split by words and are somewhat synchronized with wanikani vocab. But it's uncertain how soon that will be released or ported to wanikani in some way, if ever.
I admire the ambition, but I do not see how it will solve the problems you pointed out in the opening post. It will require the same immense amount of proofreading as the other databases.
There are plenty of tools out there that take subtitle tracks and split an episode's audio along those subtitles. If you want to build a database of native sentences with audio, that alone would give you context, audio, text, and even visuals if you wanted.
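The core of what those tools do can be sketched in a few lines: parse the subtitle timestamps into clip boundaries, then cut the audio at those boundaries. Here is a rough, simplified version assuming a well-formed SRT file (real tools like subs2srs handle far messier input; the ffmpeg step is only mentioned in a comment, not run):

```python
# Sketch: turn an SRT subtitle file into per-line clip boundaries.
# Assumes a simple, well-formed SRT; real subtitle files need more care.
import re

TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def to_seconds(ts):
    """Convert an SRT timestamp like 00:00:01,500 to seconds as a float."""
    h, m, s, ms = (int(x) for x in TS.match(ts).groups())
    return h * 3600 + m * 60 + s + ms / 1000

def parse_srt(text):
    """Return (start, end, line) tuples, one per subtitle block."""
    clips = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = (to_seconds(t) for t in lines[1].split(" --> "))
        clips.append((start, end, " ".join(lines[2:])))
    return clips

srt = """1
00:00:01,500 --> 00:00:03,250
こんにちは。

2
00:00:04,000 --> 00:00:06,750
元気ですか。"""

clips = parse_srt(srt)
# Each (start, end) pair could then drive something like
# `ffmpeg -ss <start> -to <end>` to cut the matching audio clip.
```

Each resulting tuple carries the sentence text plus the exact audio span, which is the (text, audio) pairing the database would need; adding a screenshot at the midpoint of each span would give the visuals too.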
About ImmersionKit, I have grown to dislike cutesy Anime voices and go for dramas instead. (Something like Death Note is still OK, though.) I would also consider news, documentaries and live action. (Japanese songs, non-Anime ones, are fine as well, but I wonder whether I should pursue them.) Part of it is that I don't like Japanese pop culture that much any more.