We are building a Japanese grammar learning application centered on open-ended practice and consistent memorization. The application will use a spaced repetition system (SRS) and AI so that users can create their own sentences and receive instantaneous feedback.
User Journey: After creating an account, the user will take a placement test to determine their JLPT grammar level. Once their level is determined, they will begin the learning, review, and master review workflow.
Learning Workflow: The student will first learn the new grammar points for each sublevel. For each grammar point, the student will create one sentence for a given situation, retrying until it is correct. The grammar point then moves into the Level Review pile.
Level Review Pile: According to the SRS timing, the student will review the available on-level grammar points in this window, creating one sentence for a given situation for each grammar point. The student's SRS score for that grammar point is then adjusted depending on whether the sentence is grammatically correct.
Master Review Pile: Once a grammar point has progressed to Guru and finished Level Review, it goes into the Master Review pile. The practice here is the same, making sentences from situations, but with a grammar bank of three functionally different grammar points (one of them being the correct one). Grammar points stay in this pile until they are burned.
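To make the progression concrete, here is a minimal sketch of how a grammar point might move through the piles. The stage names beyond Guru/Burned, the demotion penalty, and the field names are placeholder assumptions for illustration, not our actual implementation:

```python
from dataclasses import dataclass
from enum import IntEnum

# Assumed WaniKani-style SRS stages; the real app's stage names and intervals may differ.
class Stage(IntEnum):
    APPRENTICE_1 = 0
    APPRENTICE_2 = 1
    APPRENTICE_3 = 2
    APPRENTICE_4 = 3
    GURU_1 = 4
    GURU_2 = 5
    MASTER = 6
    ENLIGHTENED = 7
    BURNED = 8

@dataclass
class GrammarPointCard:
    grammar_point: str
    stage: Stage = Stage.APPRENTICE_1
    pile: str = "level_review"  # "level_review" or "master_review"

def record_review(card: GrammarPointCard, sentence_was_correct: bool) -> None:
    """Adjust the SRS stage after one sentence-building review."""
    if sentence_was_correct:
        if card.stage < Stage.BURNED:
            card.stage = Stage(card.stage + 1)
    else:
        # Assumed penalty: drop back two stages on a grammatically incorrect sentence.
        card.stage = Stage(max(card.stage - 2, Stage.APPRENTICE_1))

    # Once a point reaches Guru in Level Review, it graduates to the Master Review pile.
    if card.pile == "level_review" and card.stage >= Stage.GURU_1:
        card.pile = "master_review"
    # Burned items leave the review rotation entirely.
    if card.stage == Stage.BURNED:
        card.pile = "burned"
```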
History Page: Lists past learning/review sessions, including previously entered sentences along with the associated feedback and corrected sentences. All "session objects" (sentences with feedback and corrections) will be sortable by grammar point and accessible from the practice page.
Wordbank: On the review pages, appropriate vocabulary with translations will be available for each given situation. This prevents students from mistranslating or picking the wrong word when looking up vocabulary they do not know.
We're a small team, but we're making progress on a daily basis. We post weekly updates in the Discord and are looking to hear your thoughts. We don't have a website yet, but we plan to have one up in the near future.
Wait… AI???
(Yes, we are aware of the shortcomings of AI, such as its occasional unnaturalness. However, we believe there is much to gain from using AI, such as correctly evaluating grammatical accuracy, especially for the N5-N3 grammar points we are focusing on initially. To extract the best performance from our AI, we use highly engineered prompts that take the context (situation) and the grammar point being evaluated into account, and that separate the evaluation of grammatical accuracy from the evaluation of naturalness. We are confident in the grammatical accuracy evaluation for N5-N3, and have also gotten promising results from the naturalness evaluation.)
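As a rough illustration of the two-step evaluation (grammatical accuracy judged first, naturalness judged separately), here is a sketch using the OpenAI Chat Completions API. The prompt wording, JSON shape, and function name are invented for this example and are not our actual engineered prompts:

```python
import json
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()

def evaluate_sentence(situation: str, grammar_point: str, sentence: str) -> dict:
    """Ask the model for grammatical accuracy first, then naturalness, as separate judgments."""
    prompt = (
        "You are a Japanese grammar tutor.\n"
        f"Situation: {situation}\n"
        f"Target grammar point: {grammar_point}\n"
        f"Student sentence: {sentence}\n\n"
        "Step 1: Judge ONLY grammatical accuracy (does the sentence correctly use the "
        "target grammar point in this situation?).\n"
        "Step 2: Separately judge naturalness.\n"
        'Respond as JSON: {"grammatical": true/false, "grammar_feedback": "...", '
        '"natural": true/false, "naturalness_feedback": "...", "corrected_sentence": "..."}'
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Only the grammatical-accuracy verdict would feed the SRS score;
# the naturalness feedback is shown to the student as advice.
```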
So what do you have? No website or app yet, so any screenshots of the WIP you can share?
What does this mean exactly? That the AI will determine whether a sentence or phrase from the user is grammatically correct / natural? Do you have a native Japanese speaker involved to validate this process?
Also, what does "unlike bunpro" in the topic title mean, since they also do grammar SRS?
We do have screenshots, tho a bit outdated as I don't have the most recent localhost.
The AI will first identify whether the phrase is grammatically correct and then separately evaluate the naturalness of the language. The SRS score is incremented based on the AI's evaluation of grammatical accuracy. We do have native Japanese speakers involved in validating outputs, and so far it has been successful, particularly for N5.
While AI can be a bit of a taboo topic depending on its use (and I can understand having reservations here), I feel like it'd be a fantastic starting point for self learners, particularly those who don't have a native Japanese speaker to regularly talk with and correct sentence structure/formality issues. Is AI perfect? No, but I think it could be a decent starting point. I certainly wouldn't rely on it exclusively, but could see myself using it to start getting the hang of things.
I feel like with the current AI trend being LLMs, AI is often a terrible tool for self learners since they tend to make stuff up, but confidently present it as fact. So if you don't already know something, you have no way of knowing whether you can trust the output. But as I say, that's really only a big problem with the LLMs. AI (specifically machine learning and neural nets) really comes down to the training data.
Which raises the question: @leoluan can you give any insight into your training data? Type of data, size, etc.
When I heard "unlike bunpro" I had hopes that maybe it would be a comprehension based srs system rather than an output based one, but that doesn't seem to be the case.
Sean blue's comments about AI sum up my thoughts pretty well though, so I guess I don't have much to add. I personally didn't like bunpro, and I can't say I would see myself using this either (even if I was back at N5). Output, with AI evaluating it as part of an srs system, seems very specific and dependent on the AI actually being consistently right.
I definitely agree with this, which is why I can understand the reservations. I'm a software developer by trade, so I've seen my fair share of questionable AI output haha. You're right that it's very problematic when you have no way of knowing if what it's saying is correct. That being said, I've also seen how helpful it can be in certain scenarios, so I guess I'm a bit of an optimist. AI is being honed at a terrifying speed, so while I don't think it'll ever be perfect, I do think it has a lot of room to grow from what we currently know.
This is key. Without some sort of way to prove that the AI being used is outputting factual information (and not just a certain % of the time), the whole thing is moot, as much as I would love to see an idea like this be successful. But I don't think it's impossible.
We are using a prompted GPT-4o, so the training data is currently out-of-the-box GPT-4o. However, we are compiling our own training data from combinations of situation, grammar point, input sentence, and feedback, where the feedback is written by a native Japanese teacher. We now have 300 example sets that we will use to fine-tune GPT-4o.
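For anyone curious what one of those example sets might look like, here is a sketch in the OpenAI chat fine-tuning JSONL format. The situation, sentence, and feedback text below are made up for illustration; in the real dataset the assistant message would be the native teacher's feedback:

```python
import json

# One illustrative training example in the OpenAI chat fine-tuning JSONL format.
example = {
    "messages": [
        {
            "role": "system",
            "content": "Evaluate the student's Japanese sentence for the given situation and grammar point.",
        },
        {
            "role": "user",
            "content": (
                "Situation: You are asking a friend for permission to borrow a pen.\n"
                "Grammar point: ～てもいいですか\n"
                "Student sentence: ペンを借りてもいいですか。"
            ),
        },
        {
            "role": "assistant",
            "content": "Grammatical: yes. The sentence correctly uses ～てもいいですか to ask permission.",
        },
    ]
}

# Append the example as one line of the JSONL training file.
with open("finetune_examples.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")
```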
The most basic explanation would be that output asks "what should go in this blank?" whereas comprehension gives grammar in context and then asks "what does this mean?"
Essentially, instead of the card testing whether you can produce Japanese to fit a certain sentence, you would be given a sentence in Japanese and tested on whether or not you can comprehend it.
So combining this with your previous statement about getting promising results for N5-N3, you're claiming to get those results from vanilla GPT-4o? I will concede that I don't know anything about using engineered prompts, but I find this hard to believe from my previous experience testing ChatGPT and seeing bad grammar explanations shared by others.
From my little understanding of AI, it seems like "give me a grade for naturalness and meaning-accuracy of this sentence" is more suited to what it could do, rather than correct/incorrect or "no, like this". I'm not really seeing how SRS gets wedged in here naturally without a clear way of being "wrong". (Ironic, because that's usually how I feel about AI)
It would be more like what a human teacher would do, making you produce grammatical output and giving tips. (Which someone who disagreed or was confused could then follow up with a real person later)
Which does sound interesting to me and would be different from bunpro. The way it's described in the post doesn't compel me, but maybe I'd try it out (because why not).
I wish bunpro was more flexible with the grammar. An output-based SRS is so rigid; it doesn't allow you to make mistakes and requires one specific answer.
But in the real world grammar is much more flexible and it works as long as it is comprehensible.
I quit bunpro because of this.
PS: This is just what I felt using bunpro and grammar textbooks.
Even though our SRS is output-based, we accept a variety of answers as long as they are grammatically correct for the given situation/grammar point pair. This flexibility is one of the key differences between Bunpro and us.
This is an idea we are experimenting with in Learn and on-level reviews: doing an exercise for a given grammar point and situation, and correcting your input sentence based on feedback until it is grammatically correct. We will probably implement an adjusted version of this that is not too resource-intensive but still allows students to make corrections in the earlier SRS stages of learning.
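A rough sketch of that correction loop, reusing the hypothetical evaluate_sentence helper from the earlier example. The attempt cap and function names are assumptions for illustration, not the final design:

```python
MAX_ATTEMPTS = 3  # assumed cap so the feedback loop does not become too resource-intensive

def learn_step(situation: str, grammar_point: str, get_student_sentence) -> bool:
    """Repeat the exercise, showing feedback, until the sentence is grammatically correct."""
    for _ in range(MAX_ATTEMPTS):
        sentence = get_student_sentence()  # e.g. read from the UI
        result = evaluate_sentence(situation, grammar_point, sentence)  # hypothetical helper from above
        if result["grammatical"]:
            return True  # the grammar point can move on to the Level Review pile
        # Show the feedback and suggested correction, then let the student try again.
        print(result["grammar_feedback"])
        print("Suggested correction:", result["corrected_sentence"])
    return False
```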