Meaning synonym limit problem: writing 53 characters, getting the 64-character warning

Hi there,

I am writing a little app (Doitsukani) to add German translations to Wanikani as user synonyms, based on the Wadoku dictionary. Since some of the translations are fairly long, I truncate them to fit into the 64-byte limit. However, it seems that even at fewer than 64 bytes, Wanikani returns “Validation failed: Meaning synonyms 64 characters max for synonyms!”

For example, the following request fails for me, even though the longest synonym is 56 bytes according to my hex editor.

curl 'https://api.wanikani.com/v2/study_materials' \
  -H 'Authorization: Bearer ...' \
  -H 'Content-Type: application/json' \
  --data-raw '{"study_material":{"subject_id":6488,"meaning_synonyms":["laut","gemäß","mittels","~ verdanken;dienen zu ~;kommen von ~;entsprechend;st~"]}}' 

I am really calculating the byte length of the UTF-8 encoded string, not the number of characters.
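
For reference, this is roughly the kind of measurement and truncation I mean; a minimal sketch in TypeScript, not the exact Doitsukani code (TextEncoder is standard in browsers and Node):

// UTF-8 byte length, as opposed to s.length, which counts UTF-16 code units.
const encoder = new TextEncoder();

function utf8ByteLength(s: string): number {
  return encoder.encode(s).length;
}

// Truncate to at most maxBytes of UTF-8 without splitting a character:
// drop whole code points from the end until the string fits.
function truncateUtf8(s: string, maxBytes: number): string {
  let chars = [...s];
  while (utf8ByteLength(chars.join("")) > maxBytes) {
    chars = chars.slice(0, -1);
  }
  return chars.join("");
}

console.log(utf8ByteLength("gemäß")); // 7 bytes: “ä” and “ß” take two bytes each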

Does anyone have an idea what is going wrong here?

Kind regards,
André


Does the whole meaning_synonyms array submitted in one call have to be smaller than 64 characters?


Good point! I was wondering about that at first, but @oldbonsai mentions in Updates to Synonyms on Item Pages: “no more than eight, no more than 64 characters per synonym, no blank ones, no duplicate ones”. I was also able to successfully upload

"meaning_synonyms": [ "Stube", "Gästezimmer", "Empfangszimmer", "Zimmer im japanischen Stil;mit Tatami ausgelegtes Zimmer;Zimmer…" ]

where the last meaning is 65 bytes and the entire thing is of course much longer.

Cheers,
André


Ok, the same curl command that returned the error yesterday worked this morning.
Maybe a server issue :thinking: Oh well, that cost a lot of time …


Ok, it’s me again. I am getting the same error again with fewer than 64 bytes of UTF-8.

Can anyone enlighten me as to which character encoding the 64 bytes are based on? At least it looks like it’s not UTF-8. It seems a bit random to me.
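
For anyone who wants to check their own strings, the three obvious interpretations can be compared like this (a minimal sketch, assuming a runtime with TextEncoder):

function lengths(s: string) {
  return {
    utf16CodeUnits: s.length,                      // what JavaScript’s .length reports
    codePoints: [...s].length,                     // one per character
    utf8Bytes: new TextEncoder().encode(s).length, // bytes on the wire
  };
}

console.log(lengths("gemäß"));
// -> { utf16CodeUnits: 5, codePoints: 5, utf8Bytes: 7 }

For pure ASCII synonyms all three counts coincide, which makes it hard to tell from a single failing example which one the API actually checks.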

Cheers,
André

Is it possible to use it for loading other CSV or JSON “dictionaries” for vocabulary and kanji?

Not just like this; someone would have to implement it. Currently, EDICT2 (roughly CSV) is expected as the input format, because that’s what a lot of dictionaries use. But there is a lot of post-processing due to

  1. errors in the particular dictionary (likely different for each dictionary), and
  2. the limitations of Wanikani (the same for all dictionaries); see the sketch below.
    (doitsukani/tools/edict2parser.ts at main · eickler/doitsukani · GitHub)
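
For illustration, a minimal sketch of the Wanikani-side cleanup, based only on the limits @oldbonsai quoted above (“no more than eight, no more than 64 characters per synonym, no blank ones, no duplicate ones”). Since it is unclear how the API counts the 64 characters, plain string length is used as a placeholder:

function cleanSynonyms(raw: string[]): string[] {
  const seen = new Set<string>();
  const out: string[] = [];
  for (const s of raw) {
    const trimmed = s.trim();
    // no blanks, no overlong entries, no duplicates
    if (trimmed === "" || trimmed.length > 64 || seen.has(trimmed)) continue;
    seen.add(trimmed);
    out.push(trimmed);
    if (out.length === 8) break; // at most eight synonyms per subject
  }
  return out;
}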

Cheers,
André

Managed to do it somehow with the help of ChatGPT :slight_smile:
I converted item IDs and translation values from Excel to JSON and then set meaning_synonyms with a Tampermonkey script. The script took ~7 hours, with delays of 2 s between items. :flushed:
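
For reference, a minimal sketch of what such an upload loop might look like; the item list shape is an assumption, and the endpoint and payload mirror the curl call earlier in the thread:

async function uploadAll(
  token: string,
  items: { subjectId: number; synonyms: string[] }[]
) {
  for (const item of items) {
    await fetch("https://api.wanikani.com/v2/study_materials", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        study_material: {
          subject_id: item.subjectId,
          meaning_synonyms: item.synonyms,
        },
      }),
    });
    // 2 s pause between requests, as in the script described above
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}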
