What happens with radicals that don’t have a Unicode character? How would you draw those out?
I can’t display them yet, but I am working on it. I think I will use something like libsixel.
That will be the next feature.
What about the WK audio recordings of vocabs? That’s how I learn pitch accent, among other sources.
Playing sound from a terminal is easy. The main objective was for stealth review but we can still add it. I saw that the API returns the path of the audio files.
Just want to chime in and say that this is amazing. I will definitely try running it soon. Sorry that I haven’t been able to contribute to the code, but very impressed by what is going on!
When I first saw this thread, I was thinking about some kind of blown-up ASCII rendering of the text, something like aalib.
Using ASCII for radicals will look like that. The main advantage is that the end user does not need to install extra libraries. I made it work with images too but this might be better.
In that example I did it for every radical in my review list, but I will only display them like that if they don’t have a Unicode entry.
What do you guys think?
That looks like the Matrix digital rain
There is nothing wrong with installing extra libraries. Just make sure it works for everyone (i.e., on every Linux distribution).
A good compromise would be to use a library like viu when it’s available on the system and fall back to the ASCII version when it is not.
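A sketch of that fallback logic, assuming viu is invoked as a plain CLI (`viu <image>`); the function name is illustrative, not the actual implementation:

```python
import shutil
import subprocess

def display_radical(image_path, ascii_art):
    """Render the radical with viu when it is installed; otherwise print ASCII art."""
    if shutil.which("viu"):  # is viu available on this system?
        subprocess.run(["viu", image_path])
        return "viu"
    print(ascii_art)  # the ASCII fallback needs no extra dependency
    return "ascii"
```

This keeps the zero-dependency path working everywhere while users who install viu get real image rendering for free.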
I cleaned the interface a bit. There is an example with ASCII art at the end of the gif.
This is beautiful; you’re doing God’s work. I would very much like a stealth review client. Not so much for dodging my boss’ ever-looming gaze, but more to prevent the Chinese guys in my office from laughing at me every time they see me make a mistake on my phone!
Is there anything in particular you want a hand with? If not, I’m happy to start bolting on features that I’d need to make this my main client (undo button, reorder options, an easy way to display more info after failing/passing an item etc.).
I am glad that you are willing to help.
There are a lot of things that can be done:
Better UI: We need to improve the UI so it looks better.
Play vocab audio: play the audio files on demand.
Accept similar answers, e.g. you typed “vikings” instead of “viking” or “btter” instead of “better”. I know that both WaniKani and Torii SRS do that.
Handle acceptable answers, e.g. “We are looking for the vocabulary reading, not the kanji reading.”
Link cards together. To know when you get the meaning wrong and the reading correct or vice versa.
Handle exceptions gracefully (when your internet connection drops, etc.).
Offline data. This needs to be done carefully: we must restrict that feature to users with an active subscription, lifetime access, or free-tier content.
Improve test coverage
Improve the documentation. English is not my native language. I make mistakes.
Code refactoring. You don’t like the way I wrote something, feel free to improve it.
We will also need to work on syncing the data back to WaniKani. I don’t want to mess up my stats, so I want to be sure it’s working well before sending anything back. At a minimum we need to implement “similar answers” and “handle acceptable answers” first.
I’ll probably naturally fiddle with docstrings/tests when I start playing around with the code – this is usually the first thing I do when I touch a new codebase. I’m also happy to do the usual CI admin (readthedocs, PyPI, and coverage) since I’ve done this for quite a few python projects now.
I’m happy to test sending data back to wanikani with my apprentice vocab reviews. I’ll make a fork and get stuck in once I’ve finished work!
Sounds great. Can’t wait to see your commits.
I will work on audio playback for the next feature.
I added the audio feature.
You can now play the audio from wanikani through the terminal. Audio can only be played after answering a “vocabulary reading” question (like in the official app). There are different modes.
silent: does not play any sounds or prompt anything
autoplay: automatically plays sounds
default: see the images below
You can also select the type of voice you want to hear:
Female: only plays the female audio
Male: only plays the male audio
Alternate (default): alternates between male and female; the first voice is picked randomly
Random: always random
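The voice selection above could be sketched like this (the function and mode names are illustrative, not the actual implementation):

```python
import random

def pick_voice(mode, last=None):
    """Choose the voice gender for the next audio clip.

    `mode` is one of "female", "male", "alternate", "random";
    `last` is the gender used for the previous clip (or None).
    """
    if mode in ("female", "male"):
        return mode                              # fixed voice
    if mode == "random":
        return random.choice(("female", "male")) # fresh pick every time
    # alternate (default): flip from the previous pick; first pick is random
    if last is None:
        return random.choice(("female", "male"))
    return "male" if last == "female" else "female"
```

Threading the previous pick back in as `last` is what makes the alternate mode stateless from the caller’s point of view.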
I started to work on how to handle the answers given by a user.
- When the user is asked for the kanji meaning but gives the vocab meaning, we now display the following message.
- When the user is asked for a meaning and makes a typo.
For typo detection I use the Python difflib library. It has a method named get_close_matches. I tried a few cases and a cutoff of 0.8 seemed like a good compromise. There are other libraries such as python-Levenshtein, but for now I am quite happy with difflib (since it’s in the standard library).
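A minimal sketch of that check with `difflib.get_close_matches` (the helper name and the accepted-answers list are illustrative):

```python
from difflib import get_close_matches

def is_close_enough(answer, accepted, cutoff=0.8):
    """Accept an answer that is a near-miss of any accepted meaning.

    cutoff=0.8 is the similarity ratio mentioned above; comparison
    is case-insensitive.
    """
    matches = get_close_matches(
        answer.lower(),
        [a.lower() for a in accepted],
        n=1,            # one match is enough to accept the answer
        cutoff=cutoff,
    )
    return bool(matches)
```

With this cutoff, “btter” matches “better” and “vikings” matches “viking”, while clearly wrong answers like “cat” are still rejected.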
Send me a message if you are working on a feature.
I changed quite a lot of things in the code architecture. I am not entirely satisfied but it works and it can send the data back to WaniKani.
I did my last review session (72 items) using it. I added a ‘hard mode’ that can be enabled with the ‘--hard’ flag and requires the user to input all the pronunciations of a given kanji. Obviously, hard mode does not apply to meanings.
e.g.: To answer 何 you will need to input なに,なん. The order does not matter; なん, なに works too.
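A sketch of how the hard-mode check could work (the helper name and the comma handling are my assumptions, not the actual code):

```python
def check_hard_mode(user_input, readings):
    """Hard mode: every reading must be given, in any order.

    Input is split on ASCII or Japanese commas; surrounding
    whitespace is ignored.
    """
    given = {
        part.strip()
        for part in user_input.replace("、", ",").split(",")
        if part.strip()
    }
    # Comparing sets makes the answer order-independent.
    return given == set(readings)
```

Using sets both ignores the order and rejects answers that are missing a reading or contain an extra one.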
So far we can:
- Do reviews (sends review data back to WaniKani)
- See characters without a UTF entry as ASCII art
- Listen to the audio
- Configure audio preferences (male, female, random, alternate)
- Hard mode that requires the user to input all the pronunciations of a kanji (readings only)
- WaniKani-like input for kana
- See a summary (number of reviews and lessons)
- Dry mode for test purposes (does not send anything back to the API)
- Accept spelling mistakes (for meanings)
I hope you guys like it. Feel free to improve it.
Next improvement will be on the review queue management.
I have published a new version where you can also do your reviews.
Reviews use a tab system where you can navigate from one tab to another with the arrow keys.
(I hid the mnemonic for the screenshot since this vocab is not in the free tier.)
I have been using the program to do all my reviews for a few days now. No issues so far.
I like the hard mode that forces me to input all the pronunciations.
If you turn this into a text-based adventure where you walk around an ancient haunted castle encountering reviews, I’m there.
It might get a bit tedious if you need to go through 100 reviews to beat a boss :stuck_out_tongue:
You guys are welcome to submit a ‘gamified’ pull request. My English is not good enough to write an interesting story.
I am thinking of adding some functionalities to review burned items.
I have been using the CLI to do all my lessons and reviews. I even leveled up using it.
It temporarily caches audio during a review session (so the same audio is not downloaded multiple times).
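A minimal sketch of such a per-session cache (the function name, the in-memory dict, and the temp-directory layout are my assumptions, not the actual code):

```python
import tempfile
import urllib.request
from pathlib import Path

# Session-scoped cache: URL -> local file path.
_audio_cache = {}

def fetch_audio(url, session_dir=None):
    """Download an audio file once per session; reuse the cached copy after."""
    if url in _audio_cache:
        return _audio_cache[url]  # already downloaded this session
    session_dir = Path(session_dir or tempfile.mkdtemp(prefix="wanikani-audio-"))
    dest = session_dir / Path(url).name  # naive: last URL segment as file name
    urllib.request.urlretrieve(url, dest)
    _audio_cache[url] = dest
    return dest
```

Because the cache lives in memory and the files live in a temp directory, everything naturally disappears when the session ends.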
Lesson navigation has been improved.
The sound is played in a separate thread so the app does not freeze during reviews.
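One way to sketch that non-blocking playback is to hand an external player to a daemon thread (the player list and function name are illustrative; the real client may work differently):

```python
import shutil
import subprocess
import threading

def play_audio_async(path, player=None):
    """Play an audio file without blocking the UI thread.

    Falls back to whichever common CLI player is installed;
    returns the thread, or None when no player is available.
    """
    player = player or next(
        (p for p in ("mpv", "ffplay", "aplay", "afplay") if shutil.which(p)),
        None,
    )
    if player is None:
        return None  # stay silent rather than crash
    t = threading.Thread(
        target=subprocess.run,
        args=([player, path],),
        kwargs={"stdout": subprocess.DEVNULL, "stderr": subprocess.DEVNULL},
        daemon=True,  # don't keep the process alive just for audio
    )
    t.start()
    return t
```

The daemon flag means a still-playing clip never blocks the program from exiting.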
The number of answers requested is now displayed in hard mode.
E.g., the 谷 kanji question has two readings (たに、や) but the 谷 vocabulary has only one (たに), so I felt it was better to show the number of readings requested.
Feel free to try the app. You can use the --dry-run flag to avoid sending data back to WaniKani if you are scared it will mess with your reviews.
I am trying to replace the WaniKani tags with colors (kanji, vocabulary, ja, etc.):
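A possible way to do that substitution with ANSI escape codes (the tag names follow the post; the exact color choices and function name are mine):

```python
import re

# Illustrative tag-to-color mapping; WaniKani mnemonics wrap text in
# tags like <kanji>…</kanji>, <vocabulary>…</vocabulary>, <ja>…</ja>.
COLORS = {
    "kanji": "\033[95m",       # magenta
    "vocabulary": "\033[94m",  # blue
    "ja": "\033[92m",          # green
    "radical": "\033[96m",     # cyan
}
RESET = "\033[0m"

def colorize(text):
    """Replace WaniKani markup tags with ANSI color codes for the terminal."""
    for tag, code in COLORS.items():
        text = re.sub(rf"<{tag}>(.*?)</{tag}>", rf"{code}\1{RESET}", text)
    return text
```

Resetting after each span keeps colors from bleeding into the surrounding mnemonic text.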